
Incite 10/17/2012: Passion

One of the things about celebrating a birthday is the inevitable reflection. You can’t help but ask yourself: “Another year has gone by – am I where I’m supposed to be? Am I doing what I like to do? Am I moving in the right direction?” But what is that direction? How do you know? Adam’s post at Emergent Chaos about following your passion got me thinking about my own journey. The successes, the failures, the opportunities lost, and the long (mostly) strange trip it’s been.

If you had told me 25 years ago, as I was struggling through my freshman writing class, that I’d make a living writing and that I’d like it, I’m actually not sure what my reaction would have been. I could see laughter, but I could also see nausea. And depending on when I got the feedback from that witch professor on whatever crap paper I submitted, I may have smacked you upside the head. But here I am. Writing every day. And loving it. So you never can tell where the path will lead you. As Adam says, try to resist the paint-by-numbers approach and chase what you like to do.

I’ve seen it over and over again throughout my life, and thankfully was smart enough to pay attention. My Dad left pharmacy when I was in 6th grade to go back to law school. He’s been doing the lawyer thing for 30+ years now, and he’s still engaged and learning new stuff every day. And even better, I can make countless lawyer jokes at his expense. My father-in-law has a similar story. He was in retail for 20+ years. Then he decided to become a stock broker, because he was charting stocks in his spare time and that was his passion. He gets up every day and gets paid to do what he’d do anyway.

That’s the point. If what you do feels like work all the time, you’re doing something wrong. I can envision telling my kids this story and getting the question in return: “OK Mr. Smart Guy, you got lucky and found your passion. How do I find mine?” That’s a great question, and one without an easy answer.
The only thing I’ve seen work consistently is to do lots of things and figure out what you like. Have you ever been so immersed in something that hours passed like minutes? Or seconds? Sure, if you could figure out how to play Halo professionally, that would be great. But that’s the point – be creative and figure out an opportunity to make money doing what you love. That’s easier said than done, but it’s a lot better than a sharp stick in the eye – working for people you can’t stand, doing something you don’t like.

Adam’s post starts with an excerpt from Cal Newport’s Follow a career passion?, which puts a different spin on why folks love their jobs: “The alternative career philosophy that drove me is based on this simple premise: The traits that lead people to love their work are general and have little to do with a job’s specifics. These traits include a sense of autonomy and the feeling that you’re good at what you do and are having an impact on the world.” It’s true. At least it has been for me. But my kids, and everyone else, need to earn this autonomy and gain proficiency at whatever job they are thrust into. Which is why I put such a premium on work ethic. You may not know what your passion is, but you can work your tail off as you find it. That seems to be a pretty good plan.

–Mike

Photo credits: Passion originally uploaded by Michael @ NW Lens

Heavy Research

We’re back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with all our content in its unabridged glory. And you can get all our research papers too.

  • Defending Against Denial of Service (DoS) Attacks: The Process; Defense, Part 2: Applications; Defense, Part 1: the Network
  • Understanding and Selecting Identity Management for Cloud Services: Introduction
  • Securing Big Data: Recommendations and Open Issues; Operational Security Issues

Incite 4 U

It’s not groupthink.
The problem is the checkbox: My pal Shack summarizes one of the talks he does at the IANS Forums in Infosec’s Most Dangerous Game: Groupthink. He talks about the remarkable consistency of most security programs and the controls they implement. Of course he’s really talking about the low bar set by compliance mandates, and how that checkbox mentality shapes how far too many folks think about security. So Dave busts out the latest management mental floss (The Lean Startup) and walks through some concepts to build your security program on the iterative process used in a start-up: build something, measure its success, learn from the data, and pivot to something more effective. It’s good advice, but be prepared for battle, because the status quo machine (yeah, auditors, I’m looking at you) will stand in your way when you try to do something different. That doesn’t mean it’s not the right thing to do, but it will be harder than it should be. – MR

Android gone phishin’: There’s always a lot of hype around mobile malware, in large part because AV vendors are afraid people won’t remember to buy their mobile products without a daily reminder of how hosed they are. (I kid.) (Not really.) As much as I like to minimize the problem, mobile malware has been around for a while, but it tends to be extremely platform- and region-specific. For example, it’s a bigger deal in parts of Europe and Asia than in North America, and until recently was very Symbian-heavy. Now the FBI warns of phishing-based malware for Android. It’s hard to know the scope of the problem based on a report like


New Series: Understanding and Selecting a Key Manager

Between new initiatives like cloud computing, and new mandates due to the continuous onslaught of compliance, managing encryption keys is moving from something only big banks worried about to something popping up among organizations of all sizes and shapes. Whether it is to protect customer data in a new web application or to ensure that a lost backup tape doesn’t force you to file a breach report, more and more organizations are encrypting more data in more places than ever before. And tying all of this together is the ever-present shadow of managing all those keys.

In our Pragmatic Key Management for Data Encryption paper we highlighted some of the sins of the past that made key management painful, but showed how new strategies and tools can cut through those roadblocks to make key management a much more (for lack of a better word) manageable process. In the paper we identified four strategies for data encryption key management:

  • Manage keys locally.
  • Manage keys within a single application stack with a built-in key management feature.
  • Manage keys for a silo using an external key management service/server/appliance, separate from the data and application stacks.
  • Coordinate management of most or all keys across the enterprise with a centralized key management tool.

We called these local, application stack, silo, and enterprise key management. Of those four strategies, the last two introduce a dedicated tool for key management. This series (and the eventual paper) will dig in to explain the major features and functions of a key manager, what to look for, and how to pick one that best fits your needs.

Why use a key manager?

Data encryption can be a tricky problem, especially at scale. Actually, all cryptographic operations can be tricky, but to keep our focus we will limit ourselves to encrypting data rather than digital signing, certificate management, and other uses of cryptography.
The more diverse your keys, the better your security and granularity, but the higher the complexity. While rudimentary key management is built into a variety of products – including full disk encryption, backup tools, and databases – at some point many security professionals find they need a little more power than what’s embedded in the application stack. Some of the needs include:

  • More robust reporting (especially for compliance).
  • Better administrator monitoring and logging.
  • Flexible options for key rotation and expiration.
  • Management of keys across application components.
  • Stronger security.

Or sometimes, as with custom applications, there isn’t any existing key management to lean on. In these cases it makes sense to start looking at a dedicated key manager. In terms of use cases, some of the sweet spots we’ve found include:

  • Backup encryption, due to a mix of longevity needs and very limited key management implementations in backup products themselves.
  • Database encryption, because most database management systems only include the most rudimentary key management, and rarely the ability to centrally manage keys across different database instances or segregate keys from database administrators.
  • Application encryption, which nearly always relies on a custom encryption implementation and, for security reasons, should separate key management from the application itself.
  • Cloud encryption, due to the high volume of keys and variety of deployment scenarios.

This is just to provide some context – many of you reading this probably already know you need a dedicated key manager. If you want more background on data encryption key management and when to move on to this category of tools, you should read our other paper first, then hop back to this one. For the rest of you, the remaining posts in the series will cover technical features, management features, and how to choose between products.
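To make the separation concrete, here is a minimal sketch (in Python, with entirely hypothetical names – not any particular product’s API) of the core operations a dedicated key manager centralizes outside the application stack: key creation, rotation, expiration checks, and an audit trail. A real product would back this with an HSM or hardened appliance rather than an in-memory dictionary.

```python
import secrets
import time

class KeyManager:
    """Toy in-memory key manager illustrating rotation, expiration,
    and audit logging. All names are hypothetical; real deployments
    store key material in an HSM or hardened appliance."""

    def __init__(self, rotation_period_secs):
        self.rotation_period = rotation_period_secs
        self._keys = {}        # key_id -> (key_bytes, created_at)
        self.audit_log = []    # (timestamp, actor, action, key_id)

    def _audit(self, actor, action, key_id):
        self.audit_log.append((time.time(), actor, action, key_id))

    def create_key(self, actor):
        key_id = secrets.token_hex(8)
        self._keys[key_id] = (secrets.token_bytes(32), time.time())
        self._audit(actor, "create", key_id)
        return key_id

    def get_key(self, actor, key_id):
        # Every read is logged - the kind of administrator
        # monitoring embedded key stores rarely provide.
        key, _created = self._keys[key_id]
        self._audit(actor, "read", key_id)
        return key

    def needs_rotation(self, key_id):
        _key, created = self._keys[key_id]
        return time.time() - created >= self.rotation_period

    def rotate(self, actor, key_id):
        # Replace the key material but keep the identifier, so
        # applications referencing the key_id are unaffected.
        self._keys[key_id] = (secrets.token_bytes(32), time.time())
        self._audit(actor, "rotate", key_id)
```

The point of the sketch is the shape, not the implementation: keys live outside the application, every access is attributable, and rotation is a policy decision made centrally rather than buried in each application stack.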


Defending Against DoS Attacks: the Process

As we have mentioned throughout this series, a strong underlying process is your best defense against a Denial of Service (DoS) attack. Tactics change and attack volumes increase, but if you don’t know what to do when your site goes down, it will stay down for a while. The good news is that the DoS defense process is a close relative of your general incident response process. We have already done a ton of research on the topic, so check out both our Incident Response Fundamentals series and our React Faster and Better paper. If your incident handling process isn’t where it needs to be, you should start there.

Building off the IR process, think about what you need to do as a set of activities before, during, and after the attack:

  • Before: Figure out the triggers for an attack, and perform persistent monitoring to ensure you have both sufficient warning and enough information to identify the root cause of the attack. This must happen before the attack, because you only get one chance to collect that data, while things are happening. In Before the Attack we defined a three-step process for these activities: define, discover/baseline, and monitor.
  • During: How can you contain the damage as quickly as possible? By identifying the root cause accurately and remediating effectively. This involves identifying the attack (Trigger and Escalate), identifying and mobilizing the response team (Size up), and then containing the damage in the heat of battle. During the Attack summarizes these steps.
  • After: Once the attack has been contained, focus shifts to restoring normal operations (Mop up) and making sure it doesn’t happen again (Investigation and Analysis). This involves a forensics process and some self-introspection, described in After the Attack.

But there are key differences when dealing with DoS, so let’s amend the process a bit.
We have already talked about what needs to happen before the attack, in terms of controls and architectures to maintain availability in the face of DoS attacks. That may involve network-based approaches, or focusing on the application layer – or more likely both. Before we jump into what needs to happen during the attack, let’s mention the importance of practice. You practice your disaster recovery plan, right? You should also practice your incident response plan, including a subset of that practice for DoS attacks. The time to discover the gaping holes in your process is not while the site is melting under a volumetric attack. That doesn’t mean you should blast yourself with 80Gbps of traffic either. But practice handoffs with the service provider, tune the anti-DoS gear, and ensure everyone knows their roles and accountabilities before the real thing.

Trigger and Escalate

There are a number of ways you can detect a DoS attack in progress. You might see increasing volumes or a spike in DNS traffic. Perhaps your applications get a bit flaky and fall down, or you see server performance issues. You might get lucky and have your CDN alert you to the attack (you set the CDN to alert on anomalous volumes, right?). Or, more likely, you’ll just lose your site. Increasingly these attacks come out of nowhere in a synchronized series of activities targeting your network, DNS, and applications. We are big fans of setting thresholds and monitoring everything, but DoS is a bit different in that you may not see it coming, despite your best efforts.

Size up

Now your site and/or servers are down, and all hell is likely breaking loose. You need to notify the powers that be, assemble the team, and establish responsibilities and accountabilities. You will also have your folks digging into the attack. They’ll need to identify root cause, attack vectors, and adversaries, and figure out the best way to get the site back up.
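As an illustration of the threshold approach, here is a simple sketch (hypothetical names and illustrative thresholds, not any particular monitoring product) that baselines per-interval request volume and triggers when current traffic spikes past a multiple of that baseline:

```python
from collections import deque

class VolumeTrigger:
    """Toy DoS trigger: keep a sliding window of per-interval request
    counts and escalate when the current interval exceeds a multiple
    of the windowed baseline. Window size and multiplier are
    illustrative - tune them against your own traffic profile."""

    def __init__(self, window=60, multiplier=5.0):
        self.counts = deque(maxlen=window)  # recent per-interval counts
        self.multiplier = multiplier

    def observe(self, count):
        """Record one interval's request count; return True to escalate."""
        if len(self.counts) == self.counts.maxlen:
            baseline = sum(self.counts) / len(self.counts)
            if baseline > 0 and count > baseline * self.multiplier:
                # Anomalous spike: do NOT fold it into the baseline,
                # so attack traffic can't poison the "normal" profile.
                return True
        self.counts.append(count)
        return False
```

Note that a triggering interval is deliberately excluded from the baseline, a common anomaly-detection design choice. As the text warns, though, a well-synchronized attack can take the site down before any window fills, so a trigger like this supplements (never replaces) CDN alerts and plain old “the site is down” escalation.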
Restore

There is considerable variability in what comes next, depending on which network and application mitigations are in place. Optimally, your contracted CDN and/or anti-DoS service provider already has a team working on the problem. If it’s an application attack, hopefully a little tuning lets your anti-DoS appliance block the attacks. But hope isn’t a strategy, so you need a Plan B, which usually entails redirecting your traffic to a scrubbing center, as we described in Network Defenses. The biggest decision you’ll face is when to actually redirect the traffic. If the site is totally down, that decision is easy. If it’s an application performance issue (caused by an application or network attack), you need more information – particularly an idea of whether the redirection will even help. In many cases it will, since the service provider then sees the traffic directly, likely has more expertise, and can more effectively diagnose the issue – but there will be a lag as the network converges after changes.

Finally, there is the issue of targeted organizations without contracts with a scrubbing center. In that case your best bet is to cold call an anti-DoS provider and hope they can help. These folks are in the business of fighting DoS, so they likely can – but do you want to take a chance on that? We don’t, so it makes sense to at least have a conversation with an anti-DoS provider before you are attacked – if only to understand their process and how they can help. Talking to a service provider doesn’t mean you need to contract for their service. It means you know who to call and what to do under fire.

Mop up

You have weathered the storm and your sites are operating normally again. In terms of mopping up, you’ll shunt traffic away from the scrubbing center and perhaps loosen the anti-DoS appliance/WAF rules. Keep monitoring for more signs of trouble, and probably grab a couple days of sleep to catch up.
Investigate and Analyze

Once you are well rested, don’t fall into the trap of


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.