Securosis Research

Bridging the Mobile Security Gap: Staring down Network Anarchy (new series)

No rest for the weary, it seems. As soon as we wrapped up last week's blog series we started two more. Check out Rich's new DLP series, and today I am starting to dig into the mobile security issue. We will also start up Phase 2 of the Malware Analysis Quant series this week. But don't cry for us, Argentina. Being this busy is a good problem to have.

We have seen plenty of vendor FUD (Fear, Uncertainty, and Doubt) about mobile security. And the concern isn't totally misplaced. Those crazy users bring their own devices (yes, the consumerization buzzword) and connect them to your networks. They access your critical data and take that data with them. They lose their devices (or resell them, too often with data still on them), or download compromised apps from an app store, and those devices wreak havoc on your environment. It all makes your no-win job even harder. Your increasing inability to enforce device standards or ingress paths further impairs your ability to secure the network and the information assets your organization deems important.

Let's call this situation what it is: escalating anarchy. We know that's a harsh characterization, but we don't know what else to call it. You basically can't dictate the devices, have little influence over the configurations, must support connections from everywhere, and need to provide access to sensitive stuff. Yep, we stare down network anarchy on a daily basis.

Before we get mired in feelings of futility, let's get back to your charter as a network security professional. You need to make sure the right 'people' (which actually includes devices and applications) access the right stuff at the right times. Of course the powers that be don't care whether you focus on devices or the network – they just want the problem addressed so they don't have to worry about it. As long as the CEO can connect to the network and get the quarterly numbers on her iPad from a beach in the Caribbean it's all good.
What could possibly go wrong with that?

Last year we documented a number of these mobile and consumerization drivers, and some ideas on network controls to address the issues, in the paper Network Security in the Age of Any Computing. That research centered around how to put some network controls in place to provide a semblance of order. Things like network segmentation and implementing a 'vault' architecture to ensure devices jump through a sufficient number of hoops before accessing important stuff. But that only scratched the surface of this issue. It's like an iceberg – only about 20% of the problems in supporting these consumer-grade devices are apparent.

Unfortunately there is no single answer to this issue – instead you need a number of controls working in concert to offer some modicum of mobile device control. We need to orchestrate the full force of all the controls at our disposal to bridge this mobile security gap. In this series we will examine both device and network level tactics. Even better, we will pinpoint some of the operational difficulties inherent in making these controls work together, being sure to balance protection against usability.

Before we jump into a short analysis of device-centric controls, it's time to thank our friends at ForeScout for sponsoring this series. Without our sponsors we'd have no way to pay for coffee, and that would be a huge problem.

Device-centric Controls

When all you have is a hammer, everything looks like a nail, right? It seems like this has been the approach to addressing the security implications of consumerization. Folks didn't really know what to do, so they looked at mobile device management (MDM) solutions as the answer to their problems. As we wrote in last year's Mobile Device Security paper (PDF), a device-centric security approach starts with setting policies for who can have certain devices and what they can access.
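The 'vault' idea can be sketched as a layered access decision, where each network segment demands more hoops before a device gets in. Everything below (the Device class, the checks, the segment names) is an illustrative assumption for this post, not any specific product's API:

```python
# Minimal sketch of a "vault" access decision: a device must clear several
# independent hoops before it may touch sensitive resources.
# Device, SEGMENT_POLICY, and the check names are hypothetical.

from dataclasses import dataclass

@dataclass
class Device:
    managed: bool          # enrolled in MDM
    encrypted: bool        # storage encryption enabled
    on_vpn: bool           # traffic routed through the corporate VPN
    passed_posture: bool   # passed a device health/posture check

# Each segment requires a different number of "hoops".
SEGMENT_POLICY = {
    "guest":     [],                                          # internet only
    "corp":      ["managed", "on_vpn"],
    "sensitive": ["managed", "encrypted", "on_vpn", "passed_posture"],
}

def can_access(device: Device, segment: str) -> bool:
    """Grant access only if the device satisfies every check the segment demands."""
    return all(getattr(device, check) for check in SEGMENT_POLICY[segment])

byod = Device(managed=False, encrypted=True, on_vpn=True, passed_posture=False)
print(can_access(byod, "guest"))      # True
print(can_access(byod, "sensitive"))  # False: unmanaged devices never reach the vault
```

The point of the design is that an unmanageable consumer device can still get something (guest access), while the sensitive segment stays behind every hoop at once.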
Of course your ability to say 'no' has eroded faster than your privacy on the Internet, so you're soon looking at specific capabilities of the MDM platform to bail you out. Many organizations use MDM to enforce configuration policies, ensuring they can wipe devices remotely and routing device traffic through a corporate VPN. This helps reduce the biggest risks. Completely effective? Not really, but you need to get through the day, and there have been few weaponized exploits targeting mobile devices, so the risk so far has been acceptable.

But relying on MDM implicitly limits your ability to ensure the right folks get to the right stuff at the right time. You know – your charter as a network security professional. For instance, by focusing on the device you have no visibility into what the user is actually surfing to. The privacy modes available on most mobile browsers make sure there are no tracks left for those who want to, uh, do research on the Internet. Sure, you might be able to force them through a VPN, but the VPN provides a pass into your network and bypasses your perimeter defenses. Once an attacker is on the VPN with access to your network, they may as well be connected to the network port in your CEO's office. Egress filtering, DLP, and content inspection can no longer monitor or restrict traffic to and from that mobile device.

What about making sure the mobile devices don't get compromised? You can check for malware on mobile devices, but that has never worked very well for other endpoint devices, and we see no reason to think security vendors have suddenly solved the problems they have been struggling with for decades. You can also (usually) wipe devices if and when you realize they have been compromised. But there is a window when the attacker may have unfettered access to your network, which we don't like. Compounding these issues, focusing exclusively on devices provides no network traffic visibility.
We advocate a Monitor Everything approach, which means you need to watch the network for anomalous traffic that might indicate an attacker in your midst. Device-centric solutions cannot provide that visibility. But this is
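As a rough illustration of what that kind of network-level monitoring looks for, one simple anomaly signal is a host whose outbound volume deviates sharply from its own baseline. The data shapes, addresses, and threshold below are illustrative assumptions, not a description of any product:

```python
# Sketch of network anomaly detection: flag hosts whose outbound byte
# counts deviate sharply from their own historical baseline.
# Thresholds, hosts, and traffic figures are hypothetical examples.

from statistics import mean, stdev

def anomalous_hosts(baseline, current, z_threshold=3.0):
    """Return hosts whose current outbound volume sits more than
    z_threshold standard deviations above their historical mean."""
    flagged = []
    for host, history in baseline.items():
        mu, sigma = mean(history), stdev(history)
        observed = current.get(host, 0)
        if sigma > 0 and (observed - mu) / sigma > z_threshold:
            flagged.append(host)
    return flagged

baseline = {
    "10.0.1.15": [120, 130, 110, 125, 118],   # MB/day, typical desktop
    "10.0.1.99": [90, 95, 100, 92, 97],
}
today = {"10.0.1.15": 122, "10.0.1.99": 4500}  # 10.0.1.99 suddenly moving gigabytes

print(anomalous_hosts(baseline, today))  # ['10.0.1.99']
```

A real deployment would watch flows rather than daily totals and baseline per time-of-day, but the principle is the same: the network sees the spike even when the device itself reports nothing wrong.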


Implementing and Managing a DLP Solution

I have been so tied up with the Nexus, CCSK, and other projects that I haven't been blogging as much as usual… but not to worry, it's time to start a nice, juicy new technical series. And once again I return to my bread and butter: DLP. As much as I keep thinking I can simply run off and play with pretty clouds, something in DLP always drags me back in. This time it's a chance to dig in and focus on implementation and management (thanks to McAfee for sponsoring something I've been wanting to write for a long time). With that said, let's dig in…

In many ways Data Loss Prevention (DLP) is one of the most far-reaching tools in our security arsenal. A single DLP platform touches our endpoints, network, email servers, web gateways, storage, directory servers, and more. There are more potential integration points than nearly any other security tool – with the possible exception of SIEM. And then we need to build policies, define workflow, and implement blocking… all based on nebulous concepts like "customer data" and "intellectual property". It's no wonder many organizations are intimidated by the thought of implementing a large DLP deployment. Yet, based on our 2010 survey data, somewhere upwards of 40% of organizations use some form of DLP.

Fortunately implementing and managing DLP isn't nearly as difficult as many security professionals expect. Over the nearly 10 years we have covered the technology – talking with probably hundreds of DLP users – we have collected countless tips, tricks, and techniques for streamlined and effective deployments, which we have compiled into straightforward processes to ease most potential pains. We are not trying to pretend deploying DLP is simple. DLP is one of the most powerful and important tools in our modern security arsenal, and anything with that kind of versatility and wide range of integration points can easily be a problem if you fail to appropriately plan or test. But that's where this series steps in. We'll lay out the processes for you, including different paths to meet different needs – all to help you get up and running, and to stay there, as quickly, efficiently, and effectively as possible. We have watched the pioneers lay the trails and hit the land mines – now it's time to share those lessons with everyone else.

Keep in mind that despite what you've heard, DLP isn't all that difficult to deploy. There are many misperceptions, in large part due to squabbling vendors (especially non-DLP vendors). But it doesn't take much to get started with DLP.

On a practical note, this series is a follow-up to our Understanding and Selecting a Data Loss Prevention Solution paper, now in its second revision. We pick up right where that paper left off, so if you get lost in any terminology we suggest you use that paper as a reference. On that note, let's start with an overview, and then we'll delve into the details.
Quick Wins for Long Term Success

One of the main challenges in deploying DLP is to show immediate value without drowning yourself in data. DLP tools are generally not too bad for false positives – certainly nowhere near as bad as IDS. That said, we have seen many people deploy these tools without knowing what they wanted to look for – which can result in a lot of what we call false real positives: real alerts on real policy violations, just not things you actually care about. The way to handle too many alerts is to deploy slowly and tune your policies, which can take a lot of time and may even focus you on protecting the wrong kinds of content in the wrong places. So we have compiled two separate implementation options:

The Quick Wins process is best for initial deployments. Your focus is on rapid deployment and information gathering rather than enforcement, which will help guide your full deployment later. We detailed this process in a white paper and will only briefly review it here.

The Full Deployment process is what you'll use for the long haul. It's a methodical series of steps for full enforcement policies. Since the goal is enforcement (even if enforcement means alert and response, instead of automated blocking and filtering), we spend more time tuning policies to produce useful results.

The key difference is that the Quick Wins process isn't intended to block every single violation – just really egregious problems. It's about getting up and running, quickly showing value by identifying key problem areas, and helping set you up for a full deployment. The Full Deployment process is where you dig in, spend more time on tuning, and implement long-term policies for enforcement.

The good news is that we designed these to work together. If you start with Quick Wins, everything you do will feed directly into full deployment. If you already know where you want to focus, you can jump right into a full deployment without bothering with Quick Wins.
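To make the tuning problem concrete, here is a toy content-detection policy: a naive regex for payment card numbers fires on any 16-digit string, while adding a Luhn checksum validation step weeds out most of the noise. The pattern and function names are illustrative assumptions, not how any DLP product actually works internally:

```python
# Toy DLP content rule: detect card-like numbers in text.
# Illustrates policy tuning: the raw regex alone over-alerts, and a
# Luhn checksum pass cuts the false positives dramatically.

import re

# Matches 16 digits, optionally separated by spaces or dashes.
CARD_RE = re.compile(r"\b(?:\d[ -]?){15}\d\b")

def luhn_valid(number: str) -> bool:
    """Standard Luhn checksum, which all real payment card numbers satisfy."""
    digits = [int(d) for d in re.sub(r"\D", "", number)]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:       # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def find_card_numbers(text: str, validate: bool = True):
    """Untuned policy: regex only. Tuned policy: regex plus Luhn check."""
    hits = [m.group() for m in CARD_RE.finditer(text)]
    return [h for h in hits if luhn_valid(h)] if validate else hits

sample = "Order 4111 1111 1111 1111 shipped; tracking 1234 5678 9012 3456."
print(find_card_numbers(sample, validate=False))  # both 16-digit strings match
print(find_card_numbers(sample))                  # only the Luhn-valid card
```

The same tradeoff drives the two processes above: Quick Wins runs loose policies to find the egregious stuff fast, and Full Deployment invests in this kind of validation and tuning before turning on enforcement.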
In either case the process guides you around common problems and should speed up implementation. In our next post we'll show you where to get started and start laying out the processes.

The 2012 Disaster Recovery Breakfast

Really? It’s that time again? Time to prepare for the onslaught that is the RSA Conference. Well, we’re 5 weeks out, which means Clubber Lang was exactly right. My prediction? Pain! Pain in your head, and likely a sick feeling in your stomach and ringing in your ears. All induced by an inability to restrain your consumption when surrounded by oodles of fellow security geeks and free drinks. Who said going to that party in the club with music at 110 decibels was a good idea?

But rest easy – we’re here for you. Once again, with the help of our friends at ThreatPost, SchwartzMSL, and Kulesa Faul, we will be holding our Disaster Recovery Breakfast to cure what ales you (or ails you, but I think my version is more accurate). As always, the breakfast will be Thursday morning from 8-11 at Jillian’s in the Metreon. It’s an open door – come and leave as you want. We’ll have food, beverages, and assorted recovery items to ease your day (non-prescription only).

Remember what the DR Breakfast is all about. No marketing, no spin, just a quiet place to relax and have muddled conversations with folks you know, or maybe even go out on a limb and meet someone new. After three nights of RSA Conference shenanigans, it’s an oasis in a morass of hyperbole, booth babes, and tchotchke hunters. Invite below. See you there. To help us estimate numbers please RSVP to rsvp@securosis.com.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

“Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.”

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.