[New White Paper] Understanding and Selecting Data Masking Solutions

Today we are launching a new research paper on Understanding and Selecting Data Masking Solutions. As we spoke with vendors, customers, and data security professionals over the last 18 months, we saw big changes occurring in masking products. We received many new customer inquiries regarding masking, often for use cases outside classic test data creation. We wanted to discuss these changes and share what we see with the community.

Our goal has been to ensure the research addresses common questions from both technical and non-technical audiences. We did our best to cover the business applications of masking in a non-technical, jargon-free way. Not everyone who is interested in data security has a degree in data management or security, so we geared the first third of the paper to problems you can reasonably expect to solve with masking technologies. Those of you interested in the nuts and bolts need not fear – we drill into the myriad technical variables later in the paper.

The following excerpt offers an overview of what the paper covers:

Data masking technology provides data security by replacing sensitive information with a non-sensitive proxy, but doing so in such a way that the copy looks – and acts – like the original. This means non-sensitive data can be used in business processes without changing the supporting applications or data storage facilities. You remove the risk without breaking the business! In the most common use case, masking limits the propagation of sensitive data within IT systems by distributing surrogate data sets for testing and analysis. In other cases, masking will dynamically provide masked content if a user’s request for sensitive information is deemed ‘risky’.

We are particularly proud of this paper – it is the result of a lot of research, and it took a great deal of time to refine the data. We are not aware of any other research paper that fully captures the breadth of technology options available, or anything else that discusses evolving uses for the technology. With the rapid expansion of the data masking market, many people are looking for a handle on what’s possible with masking, and that convinced us to do a deep research paper.

We quickly discovered a couple of issues when we started the research. Masking is such a generic term that most people think they have a handle on how it works, but it turns out they are typically aware of only a small sliver of the available options. Additionally, the use cases for masking have grown far beyond creating test data, evolving into a general data protection and management framework. As masking techniques and deployment options evolve, the vocabulary used to describe the variations is changing as well. We hope this research will enhance your understanding of masking systems.

Finally, we would like to thank the companies who chose to sponsor this research: IBM and Informatica. Without sponsors like these who contribute to the work we do, we could not offer this quality research free of charge to the community. Please visit their sites to download the paper, or you can find a copy in our research library: Understanding and Selecting Data Masking Solutions.
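To make the masking concept above concrete, here is a minimal, hypothetical sketch of static masking in Python: sensitive fields are replaced with format-preserving surrogates so test systems see realistic but non-sensitive data. The field names, surrogate list, and substitution rules are illustrative assumptions, not taken from the paper or any specific product.

```python
import hashlib
import random
import string

# Hypothetical surrogate names; a commercial product would use large lookup
# tables so masked data keeps realistic distributions.
SURROGATE_NAMES = ("Alex Morgan", "Jamie Lee", "Pat Chen", "Sam Rivera")

def mask_ssn(ssn: str) -> str:
    """Replace each digit with a random digit, preserving the NNN-NN-NNNN format."""
    return "".join(random.choice(string.digits) if c.isdigit() else c for c in ssn)

def mask_name(name: str) -> str:
    """Map a real name to a surrogate deterministically, so the same input
    always masks to the same output and joins across data sets still work."""
    digest = int(hashlib.sha256(name.encode("utf-8")).hexdigest(), 16)
    return SURROGATE_NAMES[digest % len(SURROGATE_NAMES)]

def mask_record(record: dict) -> dict:
    """Return a masked copy of one row; the original record is left untouched."""
    masked = dict(record)
    masked["name"] = mask_name(record["name"])
    masked["ssn"] = mask_ssn(record["ssn"])
    return masked

if __name__ == "__main__":
    prod_row = {"name": "John Q. Public", "ssn": "123-45-6789", "balance": 1042.17}
    print(mask_record(prod_row))  # balance is not sensitive, so it passes through unchanged
```

A real product goes much further than this sketch: consistent masking across related tables, large surrogate libraries that preserve statistical properties, and dynamic masking applied at query time when a request is deemed risky.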

Pragmatic WAF Management: Application Lifecycle Integration

As we have mentioned throughout this series, the purpose of a WAF is to protect web-facing applications from attacks. We can debate build-security-in versus bolt-security-on ad infinitum, but ultimately the answer is both. In the last post we discussed how to build and maintain WAF policies to protect applications, but you also need to adapt your development process to incorporate knowledge of typical attack tactics into code development practices, to address application vulnerabilities over time. This involves a two-way discussion between WAF administrators and developers. Developers do their part by helping security folks understand applications: what input values should look like, and what changes are expected in upcoming releases. This ensures the WAF rules remain in sync with the application.

Granted, not every WAF user will want to integrate their application development and WAF management processes. But keeping them separate limits the effectiveness of the WAF and puts the application at risk. At a minimum, developers (or the DevOps group) should have ongoing communications with WAF managers to avoid having the WAF complicate application deployment, and to keep the WAF from interfering with normal application functions. This collaboration is critical, so let’s dig into how this lifecycle should work.

Web applications change constantly, especially given increasingly ‘agile’ development teams – some pushing web application changes multiple times per week. In fact, many web application development teams don’t even attempt to follow formal release cycles – effectively running an “eternal beta cycle”. The team’s focus and incentives are on introducing new features as quickly as possible to increase customer engagement. But this doesn’t help secure applications, and it poses a serious challenge to efforts to keep WAF policies current and effective. The greater the rate of application change, the harder it is to maintain WAF policies.

This simple relationship seriously complicates one of the major WAF selling points: the ability to implement positive security policies based on acceptable application behavior. As we explained in the last post, whitelist policies enumerate acceptable commands and their associated parameters. The WAF learns about the web applications it protects by monitoring user activity and/or by crawling application pages, determining which need protection, which serve static and/or dynamic content, the data types and value ranges for page variables, and other aspects of user sessions. If the application undergoes constant change, the WAF will always be behind, introducing a risky gap between learning and protecting. New application behavior isn’t reflected in WAF policies, which means some new legitimate requests will be blocked and some illegal requests will be ignored. That makes the WAF much less useful.

To mitigate these issues we have identified a set of critical success factors for integrating with the SDLC (software development lifecycle). When we originally outlined this process, we mentioned the friction between developers and security teams and how it adversely affects their working relationship. Our goal here is to help set you on the right path and prevent the various groups from feuding like the Hatfields and McCoys – trust us, it happens all too often. Here’s what we recommend:

Executive Sponsorship: If, as an organization, you can’t get developers and operations in the room with security, the WAF administrators are stuck on a deserted island. It’s up to them to figure out what each application is supposed to do and how it should be protected, and they cannot keep up without sufficient insight or visibility. Similarly, if the development team can say ‘no’ to the security team’s requests, they usually will – they are paid to ship new features, not to provide security. So either security is important, or it isn’t. To move past a compliance-only WAF, security folks need someone up the food chain – the CISO, the CIO, or even the CEO – to agree that the velocity of feature evolution must give some ground to address operational security. Once management has made that commitment, developers can justify improving security as part of their job. It is also possible – and in some organizational cultures advisable – to include some security requirements in the application specification. This helps guarantee code does not ship until it meets minimum security requirements – either in the app or in the WAF.

Establish Expectations: Here all parties learn what’s required and expected to get their jobs done with minimum fuss and maximum security. We suggest you arrange a sit-down with all stakeholders (operations, development, and security) to establish some guidelines on what really needs to happen, and what would be nice to have. Most developers want to know about broken links and critical bugs in the code, but they get surly when you send them thousands of changes via email, and downright pissed when all requests relate to the same non-critical issue. It’s essential to get agreement on what constitutes a critical issue and how critical issues will be addressed among the pile of competing critical requirements. Set guidelines in advance so there are no arguments when issues arise. Similarly, security people hate it when a new application enters production on a site they didn’t know existed, or when significant changes to the network or application infrastructure break the WAF configuration. Technically speaking, each party removes work the other does not want to do, or does not have time to do, so position these discussions as mutually beneficial. A true win-win – or at least a reduction in aggravation and wasted time.

Security/Developer Integration Points: The integration points define how the parties share data and solve problems together. Establish rules of engagement for how DevOps works with the WAF team, when they meet, and what automated tools will be used to facilitate communication. You might choose to invite security to development scrums, or a member of the development team could attend security meetings. You need to agree upon a communication medium that’s easy to use, establish a method for getting urgent requests addressed, and define a means of escalation for when they are not. Logical and documented notification processes need to be integrated into the application development lifecycle to ensure
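To illustrate the positive security (whitelist) model discussed earlier in this post, here is a minimal, hypothetical sketch in Python of a whitelist policy for a single endpoint: every expected parameter is enumerated with the value pattern the WAF learned from traffic or was told about by the development team, and anything not enumerated is rejected. The endpoint, parameter names, and patterns are invented for illustration and are not taken from any particular WAF product.

```python
import re

# Hypothetical whitelist (positive security) policy for one application endpoint.
# Every expected parameter is listed with the value pattern it must match.
POLICY = {
    "/checkout": {
        "quantity": re.compile(r"^\d{1,3}$"),          # 1-3 digits
        "coupon":   re.compile(r"^[A-Z0-9]{0,10}$"),   # short alphanumeric code, optional
    }
}

def allowed(path: str, params: dict) -> bool:
    """Positive model: reject unknown endpoints, unexpected parameters,
    and any value that does not match its whitelisted pattern."""
    rules = POLICY.get(path)
    if rules is None:
        return False                                   # endpoint not enumerated
    for name, value in params.items():
        pattern = rules.get(name)
        if pattern is None or not pattern.match(value):
            return False                               # unknown parameter or bad value
    return True

if __name__ == "__main__":
    print(allowed("/checkout", {"quantity": "2", "coupon": "SAVE10"}))         # True
    print(allowed("/checkout", {"quantity": "2 OR 1=1", "coupon": "SAVE10"}))  # False
```

Keeping a policy like this in sync with the application is exactly the maintenance burden described above: every new page or parameter developers ship has to appear in the whitelist before the WAF can reliably tell legitimate traffic from attacks.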

Endpoint Security Management Buyer’s Guide: Ongoing Controls—File Integrity Monitoring

After hitting on the first of the ongoing controls, device control, we now turn to File Integrity Monitoring (FIM). Also called change monitoring, this entails watching files to detect if and when they change. This capability is important for endpoint security management. Here are a few scenarios where FIM is particularly useful:

Malware detection: Malware does many bad things to your devices. It can load software and change configurations and registry settings. But another common technique is to change system files. For instance, a compromised IP stack could be installed to direct all your traffic to a server in Eastern Europe, and you might never be the wiser.

Unauthorized changes: These may not be malicious but can still cause serious problems. They can be caused by many things, including operational failure and bad patches, but ill intent is not necessary for exposure.

PCI compliance: Requirement 11.5 in our favorite prescriptive regulatory mandate, the PCI-DSS, requires file integrity monitoring to alert personnel to unauthorized modification of critical system files, configuration files, or content files.

So there you have it – you can justify the expenditure with the compliance hammer, but remember that security is about more than checking the compliance box, so we will focus on getting value from the investment as well.

FIM Process

Again we start with a process that can be used to implement file integrity monitoring. Technology controls for endpoint security management don’t work well without appropriate supporting processes.

Set policy: Start by defining your policy, identifying which files on which devices need to be monitored. There are tens of millions of files in your environment, so you need to be pretty savvy about limiting monitoring to the most sensitive files on the most sensitive devices.

Baseline files: Then ensure the files you assess are in a known good state. This may involve evaluating version, creation and modification date, or any other file attribute to provide assurance that the file is legitimate. If you declare something malicious to be normal and allowed, things go downhill quickly. The good news is that FIM vendors have databases of these attributes for billions of known good and bad files, and that intelligence is a key part of their products.

Monitor: Next you actually monitor usage of the files. This is easier said than done, because you may see hundreds of file changes on a normal day. Knowing a good change from a bad change is essential, and you need a way to keep false positives on legitimate changes from wasting everyone’s time.

Alert: When an unauthorized change is detected, you need to let someone know.

Report: FIM is required for PCI compliance, and you will likely use that budget to buy it. So you need to be able to substantiate effective use for your assessor. That means generating reports. Good times.

Technology Considerations

Now that you have the process in place, you need some technology to implement FIM. Here are some things to think about when looking at these tools:

Device and application support: Obviously the first order of business is to make sure the vendor supports the devices and applications you need to protect. We will talk about this more under research and intelligence, below.

Policy granularity: You will want to make sure your product can support different policies by device. For example, a POS device in a store (within PCI scope) needs to have certain files under control, while an information kiosk on a segmented Internet-only network in your lobby may not need the same level of oversight. You will also want to be able to set up policies based on groups of users and device types (locking down Windows XP tighter, for example, because it lacks the newer protections in Windows 7).

Small footprint agent: To implement FIM you will need an agent on each protected device. Of course there are different definitions of what an ‘agent’ is, and whether it needs to be persistent or can be downloaded as needed to check the file system and then removed – a “dissolvable agent”. You will need sufficient platform support, as well as some kind of tamper-proofing of the agent. You don’t want an attacker to turn off or otherwise compromise the agent’s ability to monitor files – or even worse, to return tampered results.

Frequency of monitoring: Related to the persistent vs. dissolvable agent question, you need to determine whether you require continuous monitoring of files or whether batch assessment is acceptable. Before you respond “Duh! Of course we want to monitor files at all times!” remember that to take full advantage of continuous monitoring, you must be able to respond immediately to every alert. Do you have 24/7 ops staff ready to pounce on every change notification? No? Then perhaps a batch process could work.

Research & intelligence: A large part of successful FIM is knowing a good change from a potentially bad change. That requires some kind of research and intelligence capability to do the legwork. The last thing you want your expensive and resource-constrained operations folks doing is assembling monthly lists of file changes for a patch cycle. Your vendor needs to do that. But it’s a bit more complicated, so here are some other notes on detecting bad file changes.

Change detection algorithm: Is a change detected based on file hash, version, creation date, modification date, or privileges? Or all of the above? Understanding how the vendor determines a file has changed enables you to ensure all your threat models are factored in.

Version control: Remember that even a legitimate file may not be the right one. Let’s say you are updating a system file, but an older legitimate version is installed. Is that a big deal? If the file is vulnerable to an attack it could be, so ensuring that versions are managed by integrating with patch information is also a must.

Risk assessment: It’s also helpful if the vendor can assess different kinds of changes
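To illustrate the baseline-and-compare approach behind FIM, here is a minimal, hypothetical sketch in Python: it records a content hash, size, and modification time for each monitored file, then flags any drift from that known good state. The monitored paths and the choice of SHA-256 are assumptions for illustration; commercial products layer on known-good file intelligence, tamper resistance, policy granularity, and continuous monitoring.

```python
import hashlib
import os

MONITORED_FILES = ["/etc/hosts", "/etc/passwd"]   # illustrative paths only

def snapshot(path: str) -> dict:
    """Capture the attributes a change-detection algorithm might compare:
    content hash, file size, and last-modification time."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    st = os.stat(path)
    return {"sha256": digest, "size": st.st_size, "mtime": st.st_mtime}

def baseline(paths) -> dict:
    """Record a known good state for each monitored file."""
    return {p: snapshot(p) for p in paths}

def detect_changes(base: dict) -> list:
    """Re-snapshot each file and report any attribute that drifted."""
    alerts = []
    for path, known_good in base.items():
        current = snapshot(path)
        if current != known_good:
            alerts.append((path, known_good, current))
    return alerts

if __name__ == "__main__":
    base = baseline(MONITORED_FILES)
    # ... later, on a schedule or in response to file system events ...
    for path, old, new in detect_changes(base):
        print(f"ALERT: {path} changed: {old} -> {new}")
```

Even this toy version shows why vendor research and intelligence matter: every legitimate patch changes these attributes, so without a feed of known good file versions the alerts quickly become noise.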

Friday Summary: August 17, 2012

Rich here… Some weeks I can’t decide if I should write something personal, professional, or technical in the Summary intro. Especially when I’m absolutely slammed and haven’t been blogging. This week I’ll err on the side of personal, and I’m sure you all will give me a little feedback if you prefer the geeky.

Last summer and fall I had a bit of a health scare, when I thought I was, well, dying, at breakfast with Mr. Rothman. Now I know many people think they’ve felt that way while dining with Mike, but I seriously thought I was going out for the count. Many months and medical tests later, they think it was just something with my stomach, and beyond a more-than-fair share of indigestion, I have moved on with life. I have always tried to be a relatively self-aware individual (well, not counting most of my life before age 27 or so). Some people blow off scares like that, but I figured it was a good way to review my life priorities. I came up with a simple list:

1. Family/Health
2. Fitness
3. Work
… … … … … …
4. Everything else

Yes, in that order. I can’t be there for my family if I’m not healthy, and my wife and kids matter a lot more to me than anything else. I don’t begrudge people who prioritize differently – I know plenty of people who put work/career in front of family (although few admit it to themselves). That’s their decision to make, and some of them see financial health the way I look at physical health.

That’s also why I put fitness ahead of work. For me, it is intrinsically tied to health, but the difference is that I will skip a workout if necessary to be there for my family. But aside from the health benefits (not counting the injuries), I want to be very active and very old someday, which I can’t do without working out. A lot. And it keeps me sane.

Then comes work. As the CEO of a startup facing its most important year ever, work has to come before everything else. This Securosis thing isn’t just a paycheck – it is long-term financial security for my family. I can’t afford to screw it up.

The upside of The List is that it makes decisions simple. Can I carve out 2 hours for a workout during business hours so I can be home with my family that night? Yes. Do I skip a Saturday morning ride because I have been traveling a lot and need to spend time with the kids? Yes. Can I travel for that conference during a birthday? No.

The downside of my list is that “everything else” is a distant fourth to the top three items. That means less contact with friends; dropping most of my hobbies that don’t involve pools, bikes, or running shoes; and not having the diversity in my life that I enjoyed before my family. Other than the part about friends, I am okay with those sacrifices. When the kids are older I can start woodworking, recreational hacking, and playing with the soldering iron again. I don’t have much of a social life and I work at home, which is pretty isolating, and maybe I will figure that out as the kids get older. It also means I dropped nearly all my emergency services work, and for the first time since I was 18 I might drop it completely for a few years.

And, in case you were wondering, The List is the reason I haven’t been writing as much. We are completing some seriously important long-term projects for the company and those need to take priority over the short stuff. But I like algorithms. Keeps things simple, especially when you need to make the hard choices.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Quiet. Must be summer.
Favorite Securosis Posts

  • Adrian Lane: Pragmatic WAF Management: Policy Management. I don’t normally pick my own posts, but I like this one. And there was so much ground to cover I am afraid I might have left something out, so I would appreciate feedback.
  • Mike Rothman: Pragmatic WAF Management: Policy Management. WAFs still ain’t easy, nor are they going to be. But understanding how to build and manage your policies is the first step. Adrian lays out what you need to know.
  • Rich: Always Assume. This is an older post, but it came up in an internal discussion today and I think it is still very relevant.

Other Securosis Posts

  • Pragmatic WAF Management: Application Lifecycle Integration.
  • Incite 8/15/2012: Fear (of the Unknown).
  • Endpoint Security Management Buyer’s Guide: Ongoing Controls – Device Control.
  • Endpoint Security Management Buyer’s Guide: Ongoing Controls – File Integrity Monitoring.

Favorite Outside Posts

  • Mike Rothman: Triple DDoS vs. krebsonsecurity. You know you have friends in high places when a one-man operation is blasted by all sorts of bad folks. Krebs describes his battle against DDoS in this post, and it’s illuminating. We will do some research on DDoS in the fall – we think this is an attack tactic you will need to pay much more attention to.
  • Adrian Lane: Software Runs the World. I’m not certain we can say software is totally to blame for the Knight Capital issue, but this is a thought-provoking piece in the mainstream media. Although I am certain those of you who have read Daemon are unimpressed.
  • Rich: Gunnar’s interview with Jason Chan of Netflix security. A gold mine of cloud security nuggets.

Project Quant Posts

  • Malware Analysis Quant: Index of Posts.

Research Reports and Presentations

  • Understanding and Selecting Data Masking Solutions.
  • Evolving Endpoint Malware Detection: Dealing with Advanced and Targeted Attacks.
  • Implementing and Managing a Data Loss Prevention Solution.
  • Defending Data on iOS.
  • Malware Analysis Quant Report.
  • Report: Understanding and Selecting a Database Security Platform.
  • Vulnerability Management Evolution: From Tactical Scanner to Strategic Platform.
  • Watching the Watchers: Guarding the Keys to the

Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments and input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.