Securosis Research

Heading to San Francisco

It’s a bit last minute, but I’ll be out in San Francisco next week for a panel at Oracle OpenWorld. I’m still working on my plans, but the panel is on Wednesday the 14th. I’m trying to decide how long to stay, so if you’re interested in meeting, drop me a line…


Understanding And Selecting A DLP Solution: Part 7, The Selection Process

Welcome to the last part of our series on understanding and selecting a data loss prevention/content monitoring and filtering solution. Over the past six entries we’ve focused on the different components of solutions and the technologies that underlie them. Today we’ll close the series with recommendations on how to run the selection process and pick the right solution for your organization. Previous entries: Part 1, Part 2, Part 3, Part 4, Part 5, Part 6.

As we’ve discussed, DLP solutions can protect a wide range of data under a wide variety of circumstances, which makes DLP a particularly dangerous technology to acquire without proper preparation. While there’s a rough feature consensus among the major players, how those features are implemented varies widely from vendor to vendor. I’ve also seen organizations jump on the DLP bandwagon without any idea of how they’d like to use their new solution. In other cases, I’ve talked to clients complaining about high false positives while failing to turn on features in the product they’ve already bought that could materially improve accuracy. I’ve probably talked to over 100 organizations that have evaluated and deployed DLP, and based on their experiences I recommend a three-phase selection process. Most of this is no different from the average procurement process, but there are a few extra recommendations specific to DLP- especially in the first phase. The process is geared toward larger organizations, so small to mid-size enterprises will need to scale it back to match their resources.

Define Needs and Prepare Your Organization

Before you start looking at any tools, you need to understand why you need DLP, how you plan on using it, and the business processes around creating policies and managing incidents.

  • Identify business units that need to be involved and create a selection committee: We tend to include two kinds of business units in the DLP selection process- content owners with sensitive data to protect, and content protectors with responsibility for enforcing controls over that data. Content owners include the business units that hold and use the data. Content protectors tend to include departments like human resources, IT security, corporate legal, compliance, and risk management. Once you identify the major stakeholders, you’ll want to bring them together for the next few steps.
  • Define what you want to protect: Start by listing out the kinds of data, as specifically as possible, that you plan on using DLP to protect. We typically break content into three categories- personally identifiable information (PII, including healthcare, financial, and other data), corporate financial data, and intellectual property. The first two tend to be more structured and will drive you towards certain solutions, while IP tends to be less structured and brings different content analysis requirements. Even if you want to protect all kinds of content, use this process to specify and prioritize, preferably on paper.
  • Decide how you want to protect it and set expectations: In this step you answer two key questions. First, in which channels and phases do you want to protect the data? This is where you decide if you just want basic email monitoring, or comprehensive data-in-motion, data-at-rest, and data-in-use protection. I suggest you be extremely specific, listing out major network channels, data stores, and endpoint requirements. The second question is what kind of enforcement you plan on implementing: monitoring and alerting only? Email filtering? Automatic encryption? You’ll get more specific in the formalized requirements later, but you should have a good idea of your expectations at this point. Also remember that needs change over time, so I recommend you break requirements into short term (within 6 months of deployment), mid-term (12-18 months after deployment), and long-term (up to 3 years after deployment). (There’s a rough sketch at the end of this post of one way to write these decisions down.)
  • Roughly outline process workflow: One of the biggest stumbling blocks for a successful DLP deployment is failure to prepare the enterprise. In this stage you define your expected workflows for creating new protection policies and handling incidents involving insiders and external attackers. Which business units are allowed to request data to protect? Who is responsible for building the policies? When a policy is violated, what’s the workflow to remediate? When is HR notified? Corporate legal? Who handles day to day policy violations- a technical security role, or a non-technical one like a compliance officer? The answers to these questions will guide you towards the solutions that best meet your workflow needs.

By the completion of this phase you will have defined key stakeholders, convened a selection team, prioritized the data you want to protect, determined where you want to protect it, and roughed out workflow requirements for building policies and remediating incidents.

Formalize Requirements

This phase can be performed by a smaller team working under a mandate from the selection committee. Here the generic requirements determined in phase 1 are translated into specific technical requirements, and any additional requirements are considered- for example, directory integration, gateway integration, data storage, hierarchical deployments, endpoint integration, and so on. Hopefully this series gives you a good idea of what to look for, and you can always refine these requirements as you proceed in the selection process and get a better feel for how the products work. At the conclusion of this stage you develop a formal RFI (Request For Information) to release to the vendors, and a rough RFP (Request For Proposals) that you’ll clean up and formally issue in the evaluation phase.

Evaluate Products

As with any product, it’s sometimes difficult to cut through the marketing hype to figure out whether it really meets your needs. The following steps should minimize your risk and help you feel confident in your final decision:

  • Issue the RFI: This is the procurement equivalent of firing the starting gun at an Olympic 100 meter dash- be prepared for all the sales calls. If you’re a smaller organization, start by sending your RFI to a trusted VAR and email…
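
To make the first phase a bit more concrete, here is a minimal sketch of one way to capture those protection decisions as structured requirements before you go to vendors. Everything in it, the content types, channels, priorities, and owners, is a hypothetical example rather than anything from a specific product or this post.

# Illustrative sketch only: one way to record DLP protection requirements.
# All content types, channels, priorities, and owners below are hypothetical.
from dataclasses import dataclass
from enum import Enum
from typing import List


class Channel(Enum):
    EMAIL = "data-in-motion: email"
    WEB = "data-in-motion: web"
    FILE_SHARES = "data-at-rest: file shares"
    ENDPOINT = "data-in-use: endpoint/USB"


class Enforcement(Enum):
    MONITOR = "monitor and alert only"
    FILTER = "block/filter"
    ENCRYPT = "automatic encryption"


@dataclass
class ProtectionRequirement:
    content_type: str            # e.g. "PII", "corporate financials", "IP"
    priority: int                # 1 = highest
    channels: List[Channel]
    enforcement: Enforcement
    timeframe: str               # "short term", "mid-term", or "long-term"
    policy_owner: str            # business unit requesting the policy
    incident_handler: str        # role that works day-to-day violations


requirements = [
    ProtectionRequirement("PII (customer financial data)", 1,
                          [Channel.EMAIL, Channel.WEB],
                          Enforcement.MONITOR, "short term",
                          "Compliance", "IT security"),
    ProtectionRequirement("Intellectual property (design documents)", 2,
                          [Channel.FILE_SHARES, Channel.ENDPOINT],
                          Enforcement.FILTER, "mid-term",
                          "Engineering", "Compliance officer"),
]

# Print a prioritized summary that could seed an RFI appendix.
for req in sorted(requirements, key=lambda r: r.priority):
    print(f"{req.priority}. {req.content_type} [{req.timeframe}]")
    for channel in req.channels:
        print(f"   - {channel.value} -> {req.enforcement.value}")

Even a throwaway script like this forces the selection committee to be explicit about what gets protected, where, and how aggressively, before the sales calls start.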


It’s Official- Symantec Really Buying Vontu

From the press release:

CUPERTINO, Calif. – November 5, 2007 – Symantec Corp. (Nasdaq: SYMC) today announced it has signed a definitive agreement to acquire Vontu, the leader in Data Loss Prevention (DLP) solutions, for $350 million, which will be paid in cash and assumed options. The acquisition is expected to close in the fourth calendar quarter of 2007, subject to receiving regulatory approvals and satisfaction of other customary closing conditions.

I’ll post some analysis of all the M&A activity in DLP tomorrow; for now, I need to go off and finish my last post in the DLP series. Congrats to the Vontu team, and it will be really interesting to see if all the recent acquisitions finally give the DLP market a boost.


TidBITS Article on Leopard Up

You Apple geeks may have noticed I’ve been writing more over at TidBITS; that’s where I tend to put my less technical Mac articles, especially those that aren’t about security. This week it’s more on the Leopard firewall. It’s less technical than my summary here, but goes into a little more depth. Overall I think the issue has been blown a bit out of proportion- I don’t consider the firewall enough of an obstacle that you shouldn’t upgrade, but you do need to understand how it works, and we all need to keep the pressure on Apple to clean it up.


Leopard Firewall + Code Signing Breaks Skype (And Other Applications)

I’m almost done with my deeper review of the firewall, but discovered something ugly in the process of podcasting and firewall testing. If you enable the firewall in “Set access for specific services and applications” mode, Leopard digitally signs, on launch, any application that isn’t already signed via Apple’s mechanism. If that application happens to change during runtime, as Skype seems to, the signature no longer matches and the application won’t run. There are no dialogs or warnings- the icon just bounces in the Dock a few times and then disappears. I went to podcast last night and had this happen. Reinstalling Skype fixed the problem, but then it hit again today. I looked in my console and saw the following:

Nov 1 16:09:34 CrashBook [0x0-0x27027].com.skype.skype[387]: Check 1 failed. Can’t run Skype

Googling that error turns up threads in the Skype forums indicating this is a known issue related to the firewall and code signing. A reinstall fixes it, but this is, obviously, a bit of a problem. I’m somewhat surprised it hasn’t made the rounds yet.
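
If you want to confirm that a broken application is a signature problem rather than something else, you can ask the system to verify the bundle’s signature. This is just a quick sketch using Apple’s codesign tool from Python; the Skype path is an example and may differ on your system.

# Sketch: verify an application bundle's code signature with Apple's
# codesign tool. The path below is an example; adjust for your system.
import subprocess

APP_PATH = "/Applications/Skype.app"  # hypothetical install location

result = subprocess.run(
    ["codesign", "--verify", "--verbose", APP_PATH],
    capture_output=True,
    text=True,
)

if result.returncode == 0:
    print(f"{APP_PATH}: signature verifies")
else:
    # codesign writes its diagnostics to stderr
    print(f"{APP_PATH}: signature verification failed")
    print(result.stderr.strip())

If verification fails on an application that previously ran fine, the on-launch signature no longer matching the binary is a likely culprit, and a reinstall (as above) resets it.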


Investigating the Leopard Firewall

Updated: See http://securosis.com/2007/11/15/ipfw-rules/.

I just spent entirely too much time digging into the Leopard firewall, and here’s what I’ve found. The less geeky version will be out on TidBITS (probably tomorrow); this is just a summary of the actual behavior:

  • “Allow all incoming connections” allows all- no surprises.
  • In all firewall modes, if you don’t select stealth mode, mDNS (Bonjour, 5353/udp) is open on a port scan.
  • “Block all incoming connections” does seem to block actual connections, but any shared ports are detected as “open/filtered” on a port scan. In “Block all” mode with stealth mode enabled, those shared services no longer show on a port scan.
  • Once you connect to another computer (outbound), Kerberos (88/tcp) is open and stays open no matter what you change on the firewall, including enabling stealth mode. This disappears on reboot. Other services may exhibit the same behavior.
  • If you choose “Set access for specific services and applications”, any time you launch a program that starts a listener, the system automatically pokes a hole in the firewall to reach that listener, but only the services in the Sharing preferences pane appear in the list. This rather defeats the purpose of the firewall, since any listener is automatically accessible!
  • That mode is labeled differently in the help file than on screen. In the help file it’s “Limit incoming connections to specific services and applications”. Just a nit, but that seems clearer to me. At least they warn us if you dig into the help: “IMPORTANT: Some programs have access through the firewall although they don’t appear in the list. These might include system applications, services, and processes (for example, those running as ‘root’). They can also include digitally signed programs that are opened automatically by other programs. You might be able to block these programs’ access through the firewall by adding them to the list.”
  • “Set access” mode seems incredibly inconsistent- some applications require you to authorize network connectivity on launch, and others don’t. For example, Skype and Firefox asked me for access, but Colloquy and Twitterrific didn’t.
  • If you are asked to authorize an application and let it connect to the network, the binary is automatically signed by the system if it wasn’t already. If that application changes, it breaks and won’t launch. You get no warning or indication that this is why your program no longer works; I only stumbled across an oblique reference in the console.
  • If you have Sharing turned on but set “Block all”, your computer still appears on the network via mDNS, but no one can connect. Annoying.

I feel like I’m missing something, but I think that’s it. In short, block mode seems to block inbound connections, but ports show as open/filtered. Stealth mode works, partially, but some ports (like Kerberos) still show on a port scan no matter what. Bonjour is ALWAYS accessible unless you’re in stealth mode. Application (“Set access…”) mode is a mess- code signing breaks applications, and the behavior is inconsistent. Any launched listener is automatically authorized, and you can’t change those settings in the firewall GUI. The good news is that ipfw is still enabled, and you can configure it manually or use a GUI like WaterRoof.

Looking at how all this works I can see what Apple was thinking, even though they made many bad decisions. When “Block all” is enabled it does seem to block most traffic, but it should close those ports rather than leaving them showing as open/filtered (I suppose not everyone will agree; feel free to say so in the comments). Stealth works, mostly. It’s hard to tell without playing more, but I think the Kerberos issue is related to outbound services. I suspect (thinking back to how Kerberos works) that it must open an outbound port to authenticate a session when you connect to a remote server. The firewall allows this since it was initiated locally (and thus implicitly trusted), but the Kerberos implementation probably doesn’t tear down the port when it’s finished, so the firewall still sees it as authorized for return traffic. Just a guess, but this could also explain some behavior noted elsewhere.

This should address the findings in the heise Security article which inspired this research- they just seem to have missed enabling stealth mode, and I’ve added a bunch more on how application control works. I’m done with the firewall for now; it took far too long to run all the scans in all the different modes just to come up with a few bullet points!
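
If you just want to spot-check one or two of the TCP findings above without rerunning a full scan, a simple connect test is enough. This is a minimal sketch; the host address and port list are examples, and it can’t test UDP services like mDNS on 5353/udp, for which you’d still want a real scanner such as nmap.

# Sketch: quick TCP connect test against a few of the ports discussed above.
# The host and port list are examples; this cannot test UDP services.
import socket

HOST = "192.168.1.10"        # hypothetical address of the Leopard machine
TCP_PORTS = [88, 548, 5900]  # Kerberos, AFP, Screen Sharing (examples)

for port in TCP_PORTS:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(2.0)
        try:
            sock.connect((HOST, port))
            print(f"{HOST}:{port} accepted the connection")
        except OSError as exc:
            print(f"{HOST}:{port} no connection ({exc})")

An accepted connection means the port is open; a quick “connection refused” means the host answered with a reset; and a timeout with no response at all is what scanners typically report as filtered or stealthed.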


The Insider Threat Will Eat Your Babies

I was reading this post by Richard Bejtlich and it reminded me of a little pet peeve. It seems some people out there criticize Richard for focusing more on external threats than on the big, bad “internal threat”. I’ll admit I used to use the term frequently when I was a little naive, but I finally realized it has become code for “scary stuff you’ll never be able to protect yourself from without spending a lot of money on our products.” Yes, there is an insider threat, but we abuse the heck out of the term. There are a few principles I like to keep in mind when discussing it. Some are a little redundant, to make a point from a slightly different perspective:

  • Once an external attacker penetrates perimeter security and/or compromises a trusted user account, they become the insider threat. Thus, from a security controls perspective it often makes little sense to distinguish between the insider threat and external attackers- there are those with access to your network, and those without. Some are authorized, some aren’t.
  • The best defenses against malicious employees are often business process controls, not security technologies.
  • Without business process controls, the technology cost to reduce the risks of the insider threat to levels comparable to the external threat is materially greater.
  • The number of potential external attackers is the population of the Earth with access to a computer. The number of potential malicious employees is no greater than the total number of employees.
  • If you allow contractors and partners the same access to your network and resources as your employees, but fail to apply security controls to their systems, you must assume they are compromised.
  • Detective controls with real-time alerting and an efficient incident response process are usually more effective for protecting internal systems than preventative technology controls, which increase overall business cost more by interfering with business processes.
  • Preventative controls built into the business process are more efficient than external technological preventative controls.

Thus, the best strategy includes a mix of technology and business controls: a focus on preventing and detecting external attacks, and reliance on a mix of preventative controls and detective controls with efficient response for the insider threat. I really don’t care whether an attacker is internal or external once they get onto a single trusted system or portion of my network.

The “insider threat” isn’t a threat; it’s become a blanket term for FUD. Understand the differences between malicious employees, careless employees, external attackers with access inside the perimeter, and trusted partners without effective controls on their systems and activities.


Good SSL Resources, And A Congrats To Chris Pepper

From Chris Pepper: his TidBITS article on SSL, and a post on some handy commands. Chris is my first resource when I need help with the command line. On a separate note, Chris managed to hit his goal of 1500 bug reports before the release of Leopard. Very cool.


Short DLP Article Up At Network World

Just a quick note that I have a short article on DLP up at Network World. I answered the question, “With all the recent news about acquisitions in the DLP space, I’m unsure if now is the time to select a solution or if I should wait. How can I tell the right time to get into DLP?” A short clip:

The decision to invest in Data Loss Prevention (DLP) should be based on how ready you are as an organization, not the internal wranglings of a young market in the midst of a growth spurt. I like to describe DLP as an adolescent market- one that provides high value even though the market and the solutions aren’t as mature as some other areas of technology.

(Full disclosure: I was connected to Network World by Reconnex, but I don’t currently have a business relationship with them, and I was not paid by anyone for the article.)


Network Security Podcast, Episode 82: The Scary Halloween/Mac Episode

Okay, it’s not that scary, other than the fact that Martin isn’t even in the episode this week. That’s right, I flew solo and invited Glenn Fleishman from TidBITS and Wi-Fi Networking News to join me for an episode dedicated to the security issues around the release of Mac OS X 10.5 Leopard.

Glenn Fleishman is a TidBITS contributing editor and a Seattle journalist who covers technology for publications like The New York Times, Popular Science, and The Economist. He blogs daily about Wi-Fi and other wireless networking at Wi-Fi Networking News. Glenn lives in Seattle with his wife Lynn, sons Ben and Rex, two iPhones, and a dozen Macs of various vintages.

This is one of the most significant updates to the Mac OS X operating system, with more dedicated security updates than any other version. But although Apple clearly invested in security, they didn’t necessarily finish the job. A combination of incomplete security feature implementations and new operating system features with security implications makes this a release for us security geeks to keep our eyes on.

Show Notes:

  • Rich’s pre-release TidBITS article on security improvements in Leopard
  • Thomas Ptacek’s post-release article evaluating the Leopard security features
  • The ISFYM (Internet Security For Your Mac) post by Open Door Networks on Back to My Mac security problems
  • The Leopard firewall article from heise Security
  • Rich’s follow-up article on Leopard security

Network Security Podcast, Episode 82, October 31, 2007


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be unbiased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.