Mogull’s Law

I’m about to commit the single most egotistical act of my blogging/analyst career. I’m going to make up my own law and name it after myself. Hopefully I’m almost as smart as everyone says I think I am.

I’ve been talking a lot, and writing a bit, about the intersection of psychology and security. One example is my post on the anonymization of losses, and another is the one on noisy vs. quiet security threats. Today I read a post by RSnake on the effectiveness of user training and security products, which was inspired by a great paper from Microsoft: So Long, And No Thanks for the Externalities: The Rational Rejection of Security Advice by Users. I think we can combine these thoughts into a simple ‘law’:

The rate of user compliance with a security control is directly proportional to the pain of the control vs. the pain of non-compliance.

We need some supporting definitions:

  • Rate of compliance equals the probability the user will follow a required security control, as opposed to ignoring or actively circumventing it.
  • The pain of the control is the time added to an established process, and/or the time to learn and implement a new process.
  • The pain of non-compliance includes the consequences (financial, professional, or personal) and the probability of experiencing those consequences. Consequences exist on a spectrum, with financial as the most impactful and social as the least.
  • The pain of non-compliance must be tied to the security control so the user understands the cause/effect relationship.

I could write it out as an equation, but then we’d all make up magical numbers instead of understanding the implications.

Psychology tells us people only care about things which personally affect them, and that fuzzy principles like “the good of the company” are low on the importance scale. It also tells us that immediate risks hold our attention far more than long-term risks, and that we rapidly de-prioritize both high-impact low-frequency events and high-frequency low-impact events. Economics teaches us how to evaluate these factors and use external influences to guide behavior at scale.

Here’s an example: currently most security incidents are managed out of a central response budget, rather than business units paying the response costs. Economics tells us we can likely increase the rate of compliance with security initiatives if business units have to pay for the response costs they incur, forcing them to directly experience the pain of a security incident.

I suspect this is one of those posts that’s going to be edited and updated a bunch based on feedback…
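For readers who want to see the shape of the relationship anyway, here is a rough sketch in notation. The symbols are illustrative labels only (they are not from the original post), and the point is the proportionality, not the numbers:

\[
R_c \;\propto\; \frac{P_{nc}}{P_c},
\qquad P_{nc} = C \times p,
\qquad P_c = t_{\mathrm{added}} + t_{\mathrm{learn}}
\]

where \(R_c\) is the rate of compliance, \(P_{nc}\) is the pain of non-compliance (severity of consequences \(C\) weighted by the probability \(p\) of experiencing them), and \(P_c\) is the pain of the control (time added to an established process plus time to learn and implement a new one). Plugging real values into any of these terms is exactly the “magical numbers” trap the post warns about; the sketch only says that compliance rises as the pain of non-compliance grows relative to the pain of the control.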


LHF: Quick Wins with DLP—the Conclusion

In the last two posts we covered the main preparation you need to get quick wins with your DLP deployment. First you need to put a basic enforcement process in place, then you need to integrate with your directory servers and major infrastructure. With those two bits out of the way, it’s time to roll up our sleeves, get to work, and start putting that shiny new appliance or server to use.

The differences between a long-term DLP deployment and our “Quick Wins” approach are goals and scope. With a traditional deployment we focus on comprehensive monitoring and protection of very specific data types. We know what we want to protect (at a granular level) and how we want to protect it, and we can focus on comprehensive policies with low false positives and a robust workflow. Every policy violation is reviewed to determine if it’s an incident that requires a response.

In the Quick Wins approach we are less concerned with incident management, and more with gaining a rapid understanding of how information is used within our organization. There are two flavors to this approach: one where we focus on a narrow data type, typically as an early step in a full enforcement process or to support a compliance need; and the other where we cast a wide net to help us understand general data usage and prioritize our efforts. Long-term deployments and Quick Wins are not mutually exclusive – each targets a different goal, and both can run concurrently or sequentially, depending on your resources.

Remember: even though we aren’t talking about a full enforcement process, it is absolutely essential that your incident management workflow be ready to go when you encounter violations that demand immediate action!

Choose Your Flavor

The first step is to decide which of two general approaches to take:

  • Single Type: In some organizations the primary driver behind the DLP deployment is protection of a single data type, often due to compliance requirements. This approach focuses only on that data type.
  • Information Usage: This approach casts a wide net to help characterize how the organization uses information, and to identify patterns of both legitimate use and abuse. This information is often very useful for prioritizing and informing additional data security efforts.

Choose Your Deployment Type

Depending on your DLP tool, it will be capable of monitoring and protecting information on the network, on endpoints, or in storage repositories – or some combination of these. This gives us three pure deployment options and four possible combinations.

  • Network Focused: Deploying DLP on the network in monitoring mode provides the broadest coverage with the least effort. Network monitoring is typically the fastest to get up and running due to lighter integration requirements. You can often plug in a server or appliance in a few hours or less, and instantly start evaluating results.
  • Endpoint Focused: Starting with endpoints should give you a good idea of which employees are storing data locally or transferring it to portable storage. Some endpoint tools can also monitor network activity on the endpoint, but these capabilities vary widely. In terms of Quick Wins, endpoint deployments are generally focused on analyzing content stored on the endpoints.
  • Storage Focused: Content discovery is the analysis of data at rest in storage repositories. Since it often requires considerable integration (at minimum, knowing the username and password to access a file share), these deployments, like endpoints, involve more effort. That said, the ability to scan major repositories is very useful, and in some organizations it’s as important (or even more important) to understand stored data as it is to monitor information moving across the network.

Network deployments typically provide the most immediate information with the lowest effort, but depending on what tools you have available and your organization’s priorities, it may make sense to start with endpoints or storage. Combinations are obviously possible, but we suggest you roll out multiple deployment types sequentially rather than in parallel, to manage project scope.

Define Your Policies

The last step before hitting the “on” switch is to configure your policies to match your deployment flavor.

In a single type deployment, either choose an existing category that matches the data type in your tool, or quickly build your own policy. In our experience, pre-built categories in most DLP tools are almost always available for the data types that commonly drive a DLP project. Don’t worry about tuning the policy – right now we just want to toss it out there and get as many results as possible. Yes, this is the exact opposite of our recommendation for a traditional, focused DLP deployment.

In an information usage deployment, turn on all the policies or enable promiscuous monitoring mode. Most DLP tools only record activity when there are policy violations, which is why you must enable the policies. A few tools can monitor general activity without relying on a policy trigger (either full content or metadata only). In both cases our goal is to collect as much information as possible to identify usage patterns and potential issues.

Monitor

Now it’s time to turn on your tool and start collecting results. Don’t be shocked – in both deployment types you will see a lot more information than in a focused deployment, including more potential false positives. Remember, you aren’t trying to manage every single incident; you want a broad understanding of what’s going on on your network, on endpoints, or in storage.

Analyze and PROFIT!

Now we get to the most important part of the process – turning all that data into useful information. Once we collect enough data, it’s time to start the analysis. Our goal is to identify broad patterns and spot any major issues. Here are some examples of what to look for:

  • A business unit sending out sensitive data unprotected as part of a regularly scheduled job.
  • Which data types broadly trigger the most violations.
  • The volume of usage of certain content or files, which may help identify valuable assets that don’t cleanly match a pre-defined policy.
  • Particular users or business units with higher numbers of violations.
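To make the analysis step concrete, here is a minimal sketch of the kind of roll-up that surfaces these patterns. It assumes a hypothetical CSV export of DLP events with columns named timestamp, user, business_unit, and policy – real export formats and field names vary by product, so treat this as an illustration rather than a recipe:

    # Sketch: aggregate exported DLP events to surface broad usage patterns.
    # Assumes a hypothetical CSV export with columns:
    #   timestamp (ISO 8601), user, business_unit, policy
    # Real DLP products name and structure these fields differently.
    import csv
    from collections import Counter

    def summarize(path):
        by_policy = Counter()   # which data types trigger the most violations
        by_unit = Counter()     # business units with unusually high counts
        by_user = Counter()     # individual users worth a closer look
        by_hour = Counter()     # spikes at fixed hours hint at scheduled jobs

        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                by_policy[row["policy"]] += 1
                by_unit[row["business_unit"]] += 1
                by_user[row["user"]] += 1
                by_hour[row["timestamp"][:13]] += 1  # bucket by hour, e.g. 2010-03-17T09

        for label, counter in [("policy", by_policy), ("business unit", by_unit),
                               ("user", by_user), ("hour", by_hour)]:
            print(f"\nTop violation counts by {label}:")
            for key, count in counter.most_common(5):
                print(f"  {key}: {count}")

    if __name__ == "__main__":
        summarize("dlp_events.csv")

Even a crude roll-up like this is usually enough to spot a business unit sending sensitive data on a schedule, or a single policy generating the bulk of the noise – which is exactly what drives the prioritization decisions that follow.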


Incite 3/17/2010: Seeing the Enemy

“WE HAVE MET THE ENEMY AND HE IS US.” POGO (1970)

I’ve worked for companies where we had to spend so much time fighting each other that the market got away. I’ve also worked at companies where internal debate and strife made the organization stronger and the product better. But there are no pure absolutes – as much as I try to be binary, most companies include both sides of the coin. But when I read of the termination of Pennsylvania’s CISO because he dared to actually talk about a breach, it made me wonder – about everything. Dennis hit the nail on the head: this is bad for all of us. Can we be successful? We all suffer from a vacuum of information. That was the premise of Adam Shostack and Andrew Stewart’s book The New School of Information Security: that we need to share information – good and bad, flattering and unflattering – to make us better at protecting stuff. Data can help. Unfortunately most of the world thinks that security through obscurity is the way to go. As Adrian pointed out in Monday’s FireStarter, there isn’t much incentive to disclose anything unless an organization must – by law. The power of negative PR grossly outweighs the security benefit of information sharing. Which is a shame.

So what do you do? Give up? Well, actually, maybe you do give up. Not on security in general, but on your organization. Every day you need to figure out whether you can overcome the enemy within your four walls. If you can’t, then move on. I know, now is the wrong time to leave a job. I get that. But how long can you go in every day and get kicked in the teeth? Only you can decide that. But if your organization is a mess, don’t wait for it to get better.

If you do decide to stay, you need to discover the power of the peer group. Your organization will not sanction it, and don’t blame me, but find a local or industry group of peeps where you can share your dirt. You take a blood oath (just like in grade school) that what is spoken about in the group stays within the group, and you spill the beans. You learn from what your peers have done, and they learn from you.

At this point we must acknowledge that widespread information sharing is not going to happen. Which sucks, but it is what it is. So we need to get creative and figure out an alternative means to get the job done. Find your peeps and learn from them. – Mike.

Photo credit: “Pogo – Walt Kelly (1951) – front cover” originally uploaded by apophysis_rocks

Incite 4 U

Time to study marketing too… – RSnake is starting to mingle with some shady characters. Well, maybe not shady, but certainly on the wrong side of the rule of law. One of his conclusions is that it’s getting harder for the bad guys to do their work, at least the work of compromising meaty, valuable targets. That’s a good thing. But the black hats are innovative and playing for real money, so they will figure something out, and their models will evolve to continue generating profits. It’s the way of the capitalist. This idea of assigning a much higher value to a zombie within the network of a target makes perfect sense. It’s no different than how marketing firms charge a lot more for leads directly within the target market. So it’s probably not a bad idea for us security folks to study a bit of marketing, which will tell us how the bad guys will evolve their tactics. – MR

Lies, Damn Lies, and Exploits – We’ve all been hearing a ton about that new “Aurora” exploit (mostly because of all the idiots who think it’s the same thing as APT), but NSS Labs took a pretty darn interesting approach to all the hype. Assuming that every anti-malware vendor on the market would block the known Aurora exploit, they went ahead and tested the major consumer AV products against fully functional variants. NSS varied both the exploit and the payload to see which tools would still block the attack. The results are uglier than a hairless cat with a furball problem. Only one vendor (McAfee) protected against all the variants, and some (read the report yourself) couldn’t handle even the most minor changes. NSS is working on a test of the enterprise versions, but I love when someone ignites the snake oil. – RM

I hate C-I-A – Confidentiality, Integrity, and Availability is what it stands for. I was reminded of this reading a CIA Triad post earlier today. Every person studying for their CISSP is taught that this is how they need to think about security. I always felt this was BS, along with a lot of other stuff they teach in CISSP classes, but that’s another topic. CIA just fails to capture the essence of security. Yeah, I have to admit that CIA represents three handy buckets that can compartmentalize security events, but they so missed the point about how one should approach security that I have become repulsed by the concept. Seriously, we need something better. Something like MSB: Misuse-Spoof-Break. Do something totally unintended, do something normal pretending to be someone else, or change something. Isn’t that a better way to think about security threats? It’s the “What can we screw with next?” triad. And push “denial of service” to the back of your mind. Script kiddies used to think it was fun, and some governments still do, but when it comes to hacking, it’s nothing more than a socially awkward cousin of the other three. – AL

Signatures in burglar alarm clothing – Pauldotcom, writing with his Tenable hat on, explains a method he calls “burglar alarms” as a way to deflate some APT hype. This method ostensibly provides a heads-up on attacks we haven’t seen before. He uses this as yet another example of how to detect an APT. I know I’m not the sharpest tool in the shed, but I don’t


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.