Understanding DLP Solutions, “DLP Light”, and DLP Features

I’m nearly done with a major revision to the very first whitepaper I published here at Securosis: Understanding and Selecting a Data Loss Prevention Solution. One of the big additions is an expanded section on DLP integration and “DLP Light” solutions. Here is my draft of that content, and I wonder if I’m missing anything major:

DLP Features and Integration with Other Security Products

Up until now we have mostly focused on describing aspects of dedicated DLP solutions, but we also see increasing interest in DLP Light tools, for four main use cases:

  • Organizations that turn on the DLP feature of an existing security product, like an endpoint suite or IPS, to generally assess their data security issues. Users typically turn on a few general rules and use the results more to scope out their issues than to actively enforce policies.
  • Organizations that only need basic protection on one or a few channels, for limited data types, and want to bundle DLP with existing tools if possible – often to save on costs. The most common examples are email filtering, endpoint storage monitoring, and content-based USB alerting/blocking for credit card numbers or customer PII.
  • Organizations that want to dip their toes into DLP with plans for later expansion. They usually turn on the DLP features of an existing security tool that is also integrated with a larger DLP solution. These are often provided by larger vendors which have acquired a DLP solution and integrated certain features into their existing product line.
  • Organizations that need to address a very specific, and very narrow, compliance deficiency that a DLP Light feature can resolve.

There are other examples, but these are the four cases we encounter most often. DLP Light tends to work best when protection scope and content analysis requirements are limited, and cost is a major concern. There is enough market diversity now that full DLP solutions are available even for cost-conscious smaller organizations, so if more-complete data protection is your goal, we suggest you take a look at the DLP solutions for small and mid-size organizations rather than assuming DLP Light is your only option. Although there are a myriad of options out there, we do see some consistency among the various DLP Light offerings, as well as in full-DLP integration with other existing tools. The next few paragraphs highlight the most common options in terms of features and architectures, including the places where full DLP solutions can integrate with existing infrastructure.

Content Analysis and Workflow

Most DLP Light tools start with some form of rules/pattern matching – usually regular expressions, often with some additional contextual analysis. This base feature covers everything from keywords to credit card numbers. Because most customers don’t want to build their own custom rules, the tools come with pre-built policies. The most common finds credit card data for PCI compliance, since that drives a large portion of the market. We next tend to see PII detection, followed by healthcare/HIPAA data discovery; all of these are designed to meet clear compliance needs. The longer the tool/feature has been on the market, the more categories it tends to support, but few DLP Light tools or features support the more advanced content analysis techniques we’ve described in this paper.
This usually results in more false positives than with a dedicated solution, but for some of these data types, like credit card numbers, even a false positive is usually something you want to take a look at.

DLP Light tools and features also tend to be more limited in terms of workflow. They rarely provide dedicated workflow for DLP, and policy alerts are integrated into whatever existing console and workflow the tool uses for its primary function. This might not be an issue, but it’s definitely important to consider before making a final decision, as these constraints might impact your existing workflow and procedures for the given tool.

Network Features and Integration

DLP features are increasingly integrated into existing network security tools, especially email security gateways. The most common examples are:

  • Email Security Gateways: These were the first non-DLP tools to include content analysis, and tend to offer the most policy/category coverage. Many of you already deploy some level of content-based email filtering. Email gateways are also one of the top integration points with full DLP solutions: all the policies and workflow are managed on the DLP side, but analysis and enforcement are integrated with the gateway directly, rather than requiring a separate mail hop.
  • Web Security Gateways: Some web gateways now directly enforce DLP policies on the content they proxy, such as preventing files with credit card numbers from being uploaded to webmail or social networking services. Web proxies are the second most common integration point for DLP solutions because, as we described in the Technical Architecture section [see the full paper, when released], they proxy web and FTP traffic and make a perfect filtering and enforcement point. These are also the tools you will use to reverse proxy SSL connections to monitor those encrypted communications – a critical capability these tools already require to block inbound malicious content. Web gateways also provide valuable context, with some able to categorize URLs and web services to support policies that account for the web destination, not just the content and port/protocol.
  • Unified Threat Management: UTMs provide broad network security coverage, including at least firewall and IPS capabilities, but usually also web filtering, an email security gateway, remote access, and web content filtering (antivirus). They are a natural location to add network DLP coverage. We don’t yet see many integrated with full DLP solutions; they tend to build their own analysis capabilities (primarily for integration and performance reasons).
  • Intrusion Detection and Prevention Systems: IDS/IPS tools already perform content inspection, and thus are a natural fit for additional DLP analysis. This is usually basic analysis integrated into existing policy sets, rather than a new, full content analysis engine. They are rarely integrated with a full DLP solution, although we do expect to see this
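To make the pattern-matching discussion concrete, here is a minimal sketch of the kind of rule most DLP Light tools start with: a credit card regular expression plus a Luhn checksum to weed out random digit strings. This is purely illustrative – a simplified stand-in, not any vendor’s actual analysis engine.

```python
import re

# Simplified card pattern: 13-16 digits, optionally separated by
# single spaces or dashes. Real engines use tighter per-brand prefixes.
CARD_PATTERN = re.compile(r'\b(?:\d[ -]?){12,15}\d\b')

def luhn_valid(digits: str) -> bool:
    """Luhn mod-10 checksum; filters out most random digit strings."""
    total = 0
    parity = len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:  # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_candidate_cards(text: str) -> list[str]:
    """Return substrings that look like card numbers and pass Luhn."""
    hits = []
    for match in CARD_PATTERN.finditer(text):
        digits = re.sub(r'[ -]', '', match.group())
        if luhn_valid(digits):
            hits.append(match.group())
    return hits

# 4111111111111111 is the well-known Luhn-valid Visa test number.
print(find_candidate_cards("order 1234, card 4111-1111-1111-1111"))
```

Even with the checksum, a rule like this still flags Luhn-valid numbers that aren’t real cards – the false positive tradeoff described above, which is tolerable for credit card policies where even a false positive is usually worth a look.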


Incite 9/29/2010: Reading Is Fundamental

For those of you with young kids, the best practice is to spend some time every day reading to them, so they learn to love books. When our kids were little, we dutifully did that, but once XX1 got proficient she would just read by herself. What did she need us for? She has inhaled hundreds of books, but none resonate like Harry Potter. She mowed through each Potter book in a matter of days, even the hefty ones at the end of the series. And she’s read each one multiple times. In fact, we had to remove the books from her room because she wasn’t reading anything else.

The Boss went over to the book store a while back and tried to get a bunch of other books to pique XX1’s interest. She ended up getting the Percy Jackson series, but XX1 wasn’t interested. It wasn’t Harry Potter or even Captain Underpants, so no sale. Not wanting to see a book go unread, I proceeded to mow through it and really liked it. And I knew XX1 would like it too, if she only gave it a chance. So the Boss and I got a bit more aggressive. She was going to read Percy Jackson, even if we had to bribe her. So we did, and she still didn’t.

It was time for drastic measures. I decided that we’d read the book together. The plan was that every night (that I was in town, anyway), we would read a chapter of The Lightning Thief. That lasted for about three days. Not because I got sick of it, and not because she didn’t want to spend time with me. She’d just gotten into the book, and then proceeded to inhale it. Which was fine by me, because I had already read it.

We decided to tackle Book 2 in the series, The Sea of Monsters, together. We made it through three chapters, and then much to my chagrin she took the book to school and mowed through three more chapters. That was a problem, because at this point I was into the book as well. And I couldn’t have her way ahead of me – that wouldn’t work. So I mandated she could only read Percy Jackson with me. Yes, I’m a mean Dad.

For the past few weeks, every night we would mow through a chapter or two. We finished the second book last night. I do the reading, she asks some questions, and then at the end of the chapter we chat a bit. About my day, about her day, about whatever’s on her mind. Sitting with her is a bit like a KGB interview, without the spotlight in my face. She’s got a million questions. Like what classes I took in college, and why I lived in the fraternity house. There’s a reason XX1 was named “most inquisitive” in kindergarten.

I really treasure my reading time with her. It’s great to be able to stop and just read. We focus on the adventures of Percy, not on all the crap I didn’t get done that day or how she dealt with the mean girl on the playground. Until we started actually talking, I didn’t realize how much I was missing by just swooping in right before bedtime, doing our prayer, and then moving on to the next thing on my list. I’m excited to start reading the next book in the series, and then something after that. At some point, I’m sure she’ll want to be IM’ing with her friends or catching up on homework as opposed to reading with me. But until then, I’ll take it. It’s become one of the best half hours of my day. Reading is clearly fundamental for kids, but there’s something to be said for its impact on parents too.
– Mike

Photo credits: “Parenting: Ready, Set, Go!” originally uploaded by Micah Taylor

Recent Securosis Posts

  • The Securosis 2010 Data Security Survey Report Rates the Top 5 Data Security Controls
  • Attend the Securosis/SearchSecurity Data Security Event on October 26
  • Proposed Internet Wiretapping Law Fundamentally Incompatible with Security
  • Government Pipe Dreams
  • Friday Summary: September 24, 2010
  • Monitoring up the Stack: File Integrity Monitoring
  • Monitoring up the Stack: DAM, Part 1

NSO Quant Posts

  • NSO Quant: Clarifying Metrics
  • NSO Quant: Manage Metrics – Signature Management
  • NSO Quant: Manage Metrics – Process Change Request and Test/Approve
  • NSO Quant: Manage Metrics – Deploy and Audit/Validate
  • NSO Quant: Manage Metrics – Monitor Issues/Tune IDS/IPS
  • NSO Quant: Health Metrics – Device Health
  • LiquidMatrix Security Briefing: September 24

Incite 4 U

  • Stuxnet comes from deep pockets – I know it’s shocking, but we are getting more information about Stuxnet. Not just on the technical side, like this post by Gary McGraw on how it actually works. Clearly it’s targeting control systems and uses some pretty innovative tactics. So the conclusion emerging is that some kind of well-funded entity must be behind it. Let me award the “Inspector Clouseau” award for obvious conclusions. But I’m not sure it really matters who is behind the attack. We may as well blame the Chinese, since we blame them for everything. It really could have been anyone. Though it’s hard for me to see the benefit to a private enterprise or rich mogul of funding an effort like that. Of course we all have our speculations, but in the end let’s just accept that when there is a will there is a way for the attackers to break your stuff. And they will. – MR
  • Are breaches declining? – One of the most surprising results in our big data security survey is that more people report breaches declining than increasing. 46% of you told us your breaches are about the same this year over last, with 12% reporting a few more or many more, and 27% reporting a few less or many less. Rsnake noticed the same trend in the DataLossDB, and is a bit skeptical. While I know not all breaches are reported (in violation of various regulations), I think a few factors are at play. I do think


A Wee Bit on DLP SaaS

Here’s some more content that’s going into the updated version of Understanding and Selecting a Data Loss Prevention Solution (hopefully out next week). Every now and then I get questions on DLP SaaS, so here’s what I’m seeing now…

DLP Software as a Service (SaaS)

Although there aren’t currently any completely SaaS-based DLP services available – due to the massive internal integration requirements for network, endpoint, and storage coverage – some early SaaS offerings are available for limited DLP deployments. Due to the ongoing interest in cloud and SaaS in general, we also expect to see new options appear on a regular basis. Current DLP SaaS offerings fall into the following categories:

  • DLP for email: Many organizations are opting for SaaS-based email security rather than installing internal gateways (or use a combination of the two). This is clearly a valuable and straightforward integration point for monitoring outbound email. Most services don’t yet include full DLP analysis capabilities, but since many major email security service providers have also acquired DLP solutions (sometimes before buying the email SaaS provider), we expect integration to expand. Ideally, if you obtain your full DLP solution from the same vendor providing your email security SaaS, the policies and violations will synchronize from the cloud to your local management server.
  • Content Discovery: While still fairly new to the market, it’s possible to install an endpoint (or server, usually limited to Windows) agent that scans locally and reports to a cloud-based DLP service. This targets smaller to mid-size organizations that don’t want the overhead of a full DLP solution, and don’t have very deep needs.
  • DLP for web filtering: Like email, we see organizations adopting cloud-based web content filtering, to block web-based attacks before they hit the local network and to better support remote users and locations. Since all the content is already being scanned, this is a nice fit for potential DLP SaaS. Given the same acquisition trends as in email services, we also hope to see integrated policy management and workflow for organizations obtaining web filtering from the same SaaS provider that supplies their on-premise DLP solution.

There are definitely other opportunities for DLP SaaS, and we expect to see other options develop over the next few years. But before jumping in with a SaaS provider, keep in mind that they won’t be merely assessing and stopping external threats, but scanning for extremely sensitive content and policy violations. This may limit most DLP SaaS to focusing on common low-hanging fruit, like those ubiquitous credit card numbers and customer PII, as opposed to sensitive engineering plans or large customer databases.
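To make the content discovery model concrete, here is a minimal sketch of what such an agent might do: walk the local filesystem, scan for a sensitive-data pattern, and report findings to a cloud service. Everything here is hypothetical – the reporting URL, the JSON format, and the simple SSN-style regex are illustrative stand-ins, not any vendor’s actual agent or API.

```python
import json
import re
import urllib.request
from pathlib import Path

# Hypothetical cloud reporting endpoint - a stand-in, not a real service.
REPORT_URL = "https://dlp.example.com/api/v1/findings"

# Simple US SSN-style pattern as a stand-in policy; real agents ship
# with vetted policy packs (PCI, PII, HIPAA) instead of one regex.
SSN_PATTERN = re.compile(r'\b\d{3}-\d{2}-\d{4}\b')

def scan_tree(root: str) -> list[dict]:
    """Walk a directory tree and record files containing pattern hits."""
    findings = []
    for path in Path(root).rglob('*'):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors='ignore')
        except OSError:
            continue  # unreadable file; a real agent would log this
        hits = SSN_PATTERN.findall(text)
        if hits:
            # Report counts and locations, never the matched values,
            # to limit how much sensitive data leaves the endpoint.
            findings.append({'file': str(path), 'policy': 'pii-ssn',
                             'match_count': len(hits)})
    return findings

def report(findings: list[dict]) -> None:
    """POST findings to the (hypothetical) cloud DLP service."""
    req = urllib.request.Request(
        REPORT_URL,
        data=json.dumps(findings).encode(),
        headers={'Content-Type': 'application/json'})
    urllib.request.urlopen(req)

if __name__ == '__main__':
    report(scan_tree('/home'))
```

Note the sketch reports match counts and file paths rather than the matched values themselves – the closing caveat above, about handing extremely sensitive content to a SaaS provider, is exactly why an agent should minimize what leaves the endpoint.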


Monitoring up the Stack: DAM, Part 2

The odds are that if you already have a SIEM/Log Management platform in place, you already look at some database audit logs. So why would you consider DAM in addition? The real question, when thinking about how far up the stack (and where) to go with your monitoring strategy, is whether adding database activity monitoring data will help with threat detection and other security efforts. To answer that question, consider that DAM collects important events which are not in log files, provides real-time analysis and detection of database attacks, and blocks dangerous queries from reaching the database. These three features together are greater than the sum of their parts.

As we discussed in Part 1 on Database Activity Monitoring, database audit logs lack critical information (e.g., SQL statements), events (e.g., system activity), and query results needed for forensic analysis. DAM focuses its event collection on areas SIEM/Log Management does not venture into: parsing database memory, collecting OS and/or protocol traffic, intercepting database library calls, using undocumented vendor APIs, and leveraging stored procedures & triggers. Each source contains important data which would otherwise be unavailable. But the value is in turning this extra data into actionable information. Over and above the attribute analysis (who, what, where, and when) that SIEM uses to analyze events, DAM uses lexical, behavioral, and content analysis techniques. By examining the components of a SQL statement – such as the WHERE and FROM clauses, and the type and number of parameters – SQL injection and buffer overflow attacks can be detected. By capturing normal behavior patterns by user and group, DAM effectively detects system misuse and account hijacking. By examining content – as it is both stored and retrieved – injection of code or leakage of credit card numbers can be detected as it occurs.

Once you have these capabilities, blocking is possible. If you need to block unwanted or malicious events, you need to react in real time, and to deploy the technology in such a way that it can stop the query from being executed. Typical SIEM/LM deployments are designed to efficiently analyze events, which means only after data has been aggregated, normalized, and correlated – too late to stop an attack from taking place. By detecting threats before they hit the database, you have the capacity to block or quarantine the activity, and take corrective action. DAM, deployed inline with the database server, can block known threats or provide ‘virtual database patching’ against them.

Those are the reasons to consider augmenting SIEM and Log Management with Database Activity Monitoring. How do you get there? What needs to be done to include DAM technology within your SIEM deployment? There are two options: leverage a standalone DAM product to submit alerts and events, or select a SIEM/Log Management platform that embeds these features. All the standalone DAM products can feed their collected events to third-party SIEM and Log Management tools. Some can normalize events so that SQL queries can be aggregated and correlated with other network events, and in some cases they can send alerts as well, either directly or by posting them to syslog. Fully integrated systems take this a step further by linking multiple SQL operations together into logical transactions, enriching the logs with event data, or performing subsequent query analysis.
They embed the analysis engine and behavioral profiling tools, allowing for tighter policy integration, reporting, and management. In the past, most database activity monitoring within SIEM products was ‘DAM Light’ – monitoring only network traffic or standard audit logs, and performing very little analysis. Today full-featured options are available within SIEM and Log Management platforms.

To restate: DAM products offer much more granular inspection of database events than SIEM, because DAM includes many more options for data collection, along with database-specific analysis techniques. The degree to which you extract useful information depends on how fully they are integrated with SIEM, and how much analysis and event sharing are established. If your requirement is to protect the database, you should consider this technology.
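To illustrate the lexical analysis described above, here is a minimal sketch of statement-level checks in the same spirit: flag WHERE-clause tautologies, stacked statements, comment truncation, and UNION SELECT patterns. These toy regex heuristics are my own illustration, far simpler than the parser-based analysis a real DAM engine performs.

```python
import re

# Toy lexical rules for suspicious SQL - illustrative only, far
# simpler than a shipping DAM product's statement parser.
SUSPICIOUS_RULES = [
    (re.compile(r"\bor\s+['\"]?(\w+)['\"]?\s*=\s*['\"]?\1['\"]?", re.I),
     'tautology in WHERE clause (e.g., OR 1=1)'),
    (re.compile(r";\s*(drop|delete|insert|update)\b", re.I),
     'stacked statement after semicolon'),
    (re.compile(r"--|/\*"),
     'inline comment, often used to truncate a query'),
    (re.compile(r"\bunion\s+(all\s+)?select\b", re.I),
     'UNION SELECT, common data-extraction pattern'),
]

def analyze(statement: str) -> list[str]:
    """Return a list of lexical findings for one SQL statement."""
    return [reason for pattern, reason in SUSPICIOUS_RULES
            if pattern.search(statement)]

# A query tampered with via classic injection:
query = "SELECT * FROM users WHERE name = '' OR '1'='1' --"
for finding in analyze(query):
    print('ALERT:', finding)
```

A real engine would work on a parsed statement rather than raw text, which is what lets it also reason about parameter counts and types, as described above.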


Monitoring up the Stack: DAM, Part 1

Database Activity Monitoring (DAM) is a form of application monitoring that looks at database-specific transactions, and integration of DAM data into SIEM and Log Management platforms is becoming more prevalent. Regular readers of this blog know that we have covered this topic many times, and gone into gory technical detail in order to help differentiate between products. If you need that level of detail, I’ll refer you to the database security page in the Securosis Research Library. Here I will give the “cliff notes” version, describing what the technology is and some of the problems it solves. The idea is to explain how DAM augments SIEM and Log Management analysis, and to outfit end users with an understanding of how DAM extends the analysis capabilities of your monitoring strategy.

So what is Database Activity Monitoring? It’s a system that captures and records database events – at a minimum all Structured Query Language (SQL) activity – in real time or near real time, including database administrator activity, across multiple database platforms, and generates alerts on policy violations. That’s Rich’s definition from four years ago, and it still captures the essence.

For those of you already familiar with SIEM, DAM is very similar in many ways. Both follow a similar process of collecting, aggregating, and analyzing data. Both provide alerts and reports, and integrate into workflow systems to leverage the analysis. Both collect different data types, in different formats, from heterogeneous systems. And both rely on correlation (and in some cases enrichment) to perform advanced analytics. How are they different? The simple answer is that they collect different events and perform different analyses. But there is another significant difference, which I stressed in this series’ introductory post: context. Database Activity Monitoring is tightly focused on database activity and how applications use the database (for good and not so good purposes). With specific knowledge of appropriate database use and operations, and a complete picture of database events, DAM is able to analyze database statements with far greater effectiveness. In a nutshell, DAM provides focused monitoring of a single important resource in the application chain, while SIEM provides great breadth of analysis across all devices. Why is this important?

  • SQL injection protection: Database activity monitoring can filter and protect against many SQL injection variants. It cannot provide complete prevention, but statement and behavioral analysis techniques catch many known and unknown database attacks. By white-listing specific queries from specific applications, DAM can detect tampered and otherwise malicious queries, as well as queries from unapproved applications (which usually doesn’t bode well). And DAM can transcend monitoring and actually block a SQL injection before the statement arrives at the database.
  • Behavioral monitoring: DAM systems capture and record activity profiles of both generic user accounts and specific database users. Changes in a specific user’s behavior might indicate disgruntled employees, hijacked accounts, or even oversubscribed permissions.
  • Compliance purposes: Given DAM’s complete view of database activity, and its ability to enforce policies on both a statement and a transaction/session basis, it’s a proven source to substantiate controls for regulatory requirements like Sarbanes-Oxley. DAM can verify the controls are both in place and effective.
  • Content monitoring: A couple of the DAM offerings additionally inspect content, so they are able to detect both SQL injection – as mentioned above – and content injection. It’s common for attackers to abuse social networking and file/photo sharing sites to store malware. When ‘friends’ view images or files, their machines become infected. By analyzing the ‘blob’ of content prior to storage, DAM can prevent some ‘drive-by’ injection attacks.

That should provide enough of an overview to start thinking about if/how to add DAM to your monitoring strategy. To get there, next we’ll dig into the data sources and analysis techniques used by DAM solutions, so you can determine whether the technology would enhance your ability to detect threats, while increasing leverage.
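As a concrete illustration of the query white-listing idea mentioned under SQL injection protection, here is a minimal sketch: strip the literals out of each statement, fingerprint what remains, and alert when an application issues a statement shape outside its learned profile. The normalization is deliberately crude, and the profile data is hypothetical – a stand-in for the baselining a real DAM product would perform.

```python
import hashlib
import re

def fingerprint(sql: str) -> str:
    """Reduce a statement to its shape by replacing literals,
    then hash it so profiles stay compact."""
    shape = re.sub(r"'[^']*'", '?', sql)        # string literals -> ?
    shape = re.sub(r'\b\d+\b', '?', shape)      # numeric literals -> ?
    shape = re.sub(r'\s+', ' ', shape).strip().lower()
    return hashlib.sha256(shape.encode()).hexdigest()

# Known-good statement shapes per application, learned during a
# baselining period (hypothetical data for illustration).
profiles: dict[str, set[str]] = {
    'billing-app': {
        fingerprint("SELECT amount FROM invoices WHERE id = 42"),
    },
}

def check(app: str, sql: str) -> None:
    """Alert when an app issues a statement shape outside its profile."""
    if fingerprint(sql) not in profiles.get(app, set()):
        print(f"ALERT [{app}]: unrecognized statement shape: {sql}")

check('billing-app', "SELECT amount FROM invoices WHERE id = 977")         # ok
check('billing-app', "SELECT amount FROM invoices WHERE id = 977 OR 1=1")  # alert
```

The payoff of fingerprinting the statement shape rather than the full text is that the same query with different values stays quiet, while a tampered query – say, a bolted-on OR 1=1 – changes the shape and fires an alert.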


Proposed Internet Wiretapping Law Fundamentally Incompatible with Security

It’s been a while since I waded in on one of these government-related privacy thingies, but a report this morning from the New York Times reveals yet another profound, and fundamental, misunderstanding of how technology and security function. The executive branch is currently crafting a legislative proposal to require Internet-based communications providers to support wiretap capabilities in their products. I support law enforcement’s capability to perform lawful intercepts (with proper court orders), but requirements to alter these technologies to make interception easier will result in unintended consequences on both technical and international political levels. According to the article, the proposal has three likely requirements:

  • Communications services that encrypt messages must have a way to unscramble them.
  • Foreign providers that do business inside the United States must establish a domestic office capable of performing intercepts.
  • Developers of software that enables peer-to-peer communication must redesign their services to allow interception.

Here’s why those are all bad ideas:

  • To allow a communications service to decrypt messages, it will need an alternative decryption key (master key). This means that anyone with access to that key has access to the communications. No matter how well the system is architected, this provides a single point of security failure, within organizations and companies that don’t have the best security track records to begin with. That’s not FUD – it’s hard technical reality.
  • Requiring foreign providers to have interception offices in the US is more a political issue than a technical one, because once we require it, foreign governments will reciprocate and require the same of US providers. Want to create a new Internet communications startup? Better hope you get millions in funding before it becomes popular enough for people in other countries to use it. And that you never need to correspond with a foreigner whose government is interested in their actions.
  • There are only 3 ways to enable interception in peer-to-peer systems: network mirroring, full redirection, or local mirroring with remote retrieval. Either you copy all communications to a central monitoring console (which either the provider or law enforcement could run), route all traffic through a central server, or log everything on the local system and provide law enforcement a means of retrieving it. Each option creates new opportunities for security failures, and each is likely to be detectable with some fairly basic techniques – creating the Internet equivalent of strange clicks on the phone lines, never mind killing the bad guys’ bandwidth caps.

Finally, policymakers need to keep in mind that once these capabilities are required, they are available to any foreign government – including all those pesky oppressive ones that don’t otherwise have the ability to compel US companies to change their products. Certain law enforcement officials are positioning this as restoring their existing legal capability for intercept. But that statement isn’t completely correct – what they are seeking isn’t a restoration of the capability to intercept, but the creation of easier methods of intercept, through back doors hard-coded into every communications system deployed on the Internet in the US. (I’d call it One-Click Intercept, but I think Amazon has a patent on that.) I don’t have a problem with law enforcement sniffing bad guys with a valid court order.
But I have a serious problem with the fundamental security of my business tools being deliberately compromised to make their jobs easier. The last quote in the article really makes the case:

“No one should be promising their customers that they will thumb their nose at a U.S. court order,” Ms. Caproni said. “They can promise strong encryption. They just need to figure out how they can provide us plain text.”

Yeah. That’ll work.
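To see why the master key requirement is a single point of failure, here is a minimal sketch (using the third-party Python cryptography package) of what any “we can unscramble it on demand” design reduces to: a second copy of every message, readable with one key. This is a deliberately simplified illustration of the argument above, not any actual or proposed provider architecture.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# The mandated "unscrambling" capability reduces to this: the provider
# holds one escrow key that can read every message it ever handles.
escrow_key = Fernet.generate_key()

def send(plaintext: bytes, recipient_key: bytes) -> tuple[bytes, bytes]:
    """Encrypt for the recipient and, per the mandate, for escrow."""
    return (Fernet(recipient_key).encrypt(plaintext),
            Fernet(escrow_key).encrypt(plaintext))

alice_key = Fernet.generate_key()
_, escrow_copy = send(b"merger closes Friday", alice_key)

# An attacker never needs Alice's key: whoever steals (or is handed)
# the single escrow key can decrypt all traffic, for every user.
print(Fernet(escrow_key).decrypt(escrow_copy))
```

However the escrow copy is produced – duplicate encryption as here, or key wrapping – compromise of that one key exposes every user’s traffic at once, which is the “hard technical reality” the post refers to.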


Attend the Securosis/SearchSecurity Data Security Event on Oct 26

We may not run our own events, but we managed to trick the folks at Information Security Magazine/SearchSecurity into letting us take over the content at the Insider Data Threats seminar in San Francisco. The reason this is so cool is that it allowed us to plan out an entire day of data-protection goodness, with a series of interlocked presentations that build directly on each other. Instead of a random collection from different presenters on different topics, all our sessions build together to provide deep actionable advice. And did I mention it’s free? Mike Rothman and I will be delivering all the content, and here’s the day’s structure:

  • Involuntary Case Studies in Data Security: We dig into the headlines and show you how real breaches happen, using real names.
  • Introduction to Pragmatic Data Security: This session lays the foundation for the rest of the day by introducing the Pragmatic Data Security process and the major management and technology components you’ll use to protect your organization’s information.
  • Network and Endpoint Security for Data Protection: We’ll focus on the top recommendations for using network and endpoint security to secure the data, not just… um… networks and endpoints.
  • Quick Wins with Data Loss Prevention, Encryption, and Tokenization: This session shows the best ways to derive immediate value from three of the hottest data protection technologies out there.
  • Building Your Data Security Program: In our penultimate session we tie all the pieces together and show you how to take a programmatic approach, rather than merely buying and implementing a bunch of disconnected pieces of technology.
  • Stump the Analysts: We’ll close the day with a free-for-all battle royale, otherwise known as “an extended Q&A session”.

There’s no charge for the event if you qualify to attend – just a couple of short sponsor sessions and a sponsor area. Our sessions target the management level, but in some places we will dig deep into key technology issues. Overall this is a bit of an experiment for both us and SearchSecurity, so please sign up and we’ll see you in SF!


NSO Quant: The End is Near!

As mentioned last week, we’ve pulled the NSO Quant posts out of the main feed because the volume was too heavy, so I have been doing some cross-linking to let those of you who don’t follow that feed know when new stuff appears over there. Well, at long last, I have finished all the metrics posts. The final post is … (drum roll, please): NSO Quant: Health Metrics – Device Health.

I’ve also put together a comprehensive index post, basically because I needed a single location to find all the work that went into the NSO Quant process. Check it out – it’s actually kind of scary to see how much work went into this series. 47 posts. Oy!

Finally, I’m in the process of assembling the final NSO Quant report, which means I’m analyzing the survey data right now. If you want a chance at the iPad, you’ll need to fill out the survey (you must complete the entire survey to be eligible) by tomorrow at 5pm ET. We’ll keep the survey open beyond that, but the iPad will be gone. Given the size of the main document – 60+ pages – I will likely split the actual metrics model out into a stand-alone spreadsheet, so that and the final report should be posted within two weeks.


Friday Summary: September 24, 2010

We are wrapping up a pretty difficult summer here at Securosis. You have probably noticed from the blog volume, as we have been swamped with research projects. Rich, Mike, and I have barely spoken with one another over the last couple months, as we are head-down, researching and writing as fast as we can. No time for movies, parties, or vacation travel. These Quant projects we have been working on make us feel like we have been buried in sand. I have been this busy several times during my career, but I can’t say I have ever been busier – I don’t think that would be possible, as there are not enough hours in the day! Mike’s been hiding at undisclosed coffee shops to the point his family had his face put on a milk carton. Rich has taken multitasking to a new level by blogging in the shower with his iPad. Me? I hope to see the shower before the end of the month.

I must say, despite the workload, projects like Tokenization and PCI Encryption have been fun. There is light at the end of the proverbial tunnel, and we will even start taking briefings again in a couple weeks. But what really keeps me going is having work to do. If I even think about complaining about the work level, something in the back of my brain reminds me that it is very good to be busy. It beats the alternative.

By the time this post goes live I will be taking part of the day off from working to help friends load all their personal belongings into a truck. After 26 years with the same employer, one of my friends here in Phoenix was laid off. He and his wife, like many of the people I know in Arizona, are losing their home. 22 years of accumulated stuff to pack … whatever is left from the various garage sales and give-aways. This will be the second friend I have helped move in the last year, and I expect it will happen a couple more times before this economic depression ends. But as depressing as that may sound, after 14 months of haggling with the bank, I think they are just relieved to be done with it and able to move on. They now have a sense of relief from the pressure, and in some ways are looking forward to the next phase of their life. And the possibility of employment. Spirits are high enough that we’ll actually throw a little party and celebrate what’s to come. Here’s to being busy!

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Seven Features To Look For In Database Assessment Tools.
  • Mike’s presentation on Endpoint Security Fundamentals.
  • Adrian’s Dark Reading post: Protegrity Gets Aggressive.
  • Adrian quoted in TechTarget. And I’ll probably catch hell for this.

Favorite Securosis Posts

  • Rich: Monitoring up the Stack: Threats. Knowing what to monitor, and how to pull the value from it, is a heck of a lot tougher than merely collecting data. Mike and Adrian are digging in and showing us how to focus.
  • Mike Rothman: Monitoring up the Stack: Threats. This blog series is getting going and it’s going to be cool. Getting visibility beyond just the network/systems is critical.
  • David Mortman: Monitoring up the Stack: Threats.
  • Adrian Lane: FireStarter: It’s Time to Talk about APT.

Other Securosis Posts

  • Government Pipe Dreams.
  • NSO Quant: Clarifying Metrics (and some more links).
  • Monitoring up the Stack: File Integrity Monitoring.
  • Incite 9/22/2010: The Place That Time Forgot.
  • New Paper (+ Webcast): Understanding and Selecting a Tokenization Solution.
  • NSO Quant: Manage Process Metrics, Part 1.
  • Understanding and Selecting an Enterprise Firewall: Selection Process.
  • Upcoming Webinar: Selecting SIEM.
Favorite Outside Posts

  • Rich: 2010 Website Security Statistics Report. Once again, Jeremiah provides some absolutely amazing numbers on the state of Web site security. He pulled together stats from over 2000 web sites across 350 organizations to provide us all some excellent benchmarks for things like numbers and types of vulnerabilities (by vertical) and time to remediate. Truly excellent, and non-biased, work.
  • Mike Rothman: Do you actually care about privacy? Lots of us say we do. Seth Godin figures we are more worried about being surprised. It makes you think.
  • Chris Pepper: evercookie: doggedly persistent cookies. By the guy who XSSed MySpace!
  • David Mortman: Cyber Weapons.
  • Adrian Lane: Titanic Secret Revealed. A serious case of focusing on the wrong threat!
  • Chris Pepper: Little Bobby Tables moves to Sweden.

Project Quant Posts

  • NSO Quant: Manage Metrics – Monitor Issues/Tune IDS/IPS.
  • NSO Quant: Manage Metrics – Deploy and Audit/Validate.
  • NSO Quant: Manage Metrics – Process Change Request and Test/Approve.

Research Reports and Presentations

  • Understanding and Selecting a Tokenization Solution.
  • Security + Agile = FAIL Presentation.
  • Data Encryption 101: A Pragmatic Approach to PCI.
  • White Paper: Understanding and Selecting SIEM/Log Management.
  • White Paper: Endpoint Security Fundamentals.
  • Understanding and Selecting a Database Encryption or Tokenization Solution.
  • Low Hanging Fruit: Quick Wins with Data Loss Prevention.

Top News and Posts

  • Twitter Worm Outbreak. The most interesting security event of the week.
  • New Security Microchip Vuln.
  • Mac OS (iOS and OSX) Security Updates.
  • New Autofill Hack Variant.
  • VMWare Security Hardening Guide (PDF).
  • evercookie. Many of you probably saw the re-tweet stream this week. Yes, this looks nasty and a pain in the ass to remove. Maybe I need to move all my browsing to temporary partitions.
  • More Conjecture on Stuxnet Malware and some alternate opinions. And some funny quotes on Schneier’s blog.
  • My relentless pursuit of the guy who robbed me. Cranky amateur cyber-sleuth FTW!
  • DRG SSH Username and Password Authentication Tag Clouds. Nice rendering of human nature (you can call it laziness or stupidity, as you prefer).

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to ds, in response to FireStarter: It’s Time to Talk about APT.

I think you are oversimplifying the situation regarding te reaons for classifying information. It is well known that information has value, and sometimes that value diminishes if others are aware you know it. Consider the historical case of the Japanese codes in WWII. If the


NSO Quant: Clarifying Metrics (and some more links)

We had a great comment by Dan on one of the metrics posts, and it merits an answer with explanation, because in the barrage of posts the intended audience can certainly get lost. Here is Dan’s comment:

Who is the intended audience for these metrics? Kind of see this as part of the job, and not sure what the value is. To me the metrics that are critical around process are do the amount of changes align with the number of authorized requests. Do the configurations adhere to current policy requirements, etc… Just thinking about presenting to the CIO that I spent 3 hours getting consensus, 2 hours on prioritizing and not seeing how that gets much traction.

One of the pillars of my philosophy on metrics is that there are really three sets of metrics that network security teams need to worry about. The first is what Dan is talking about: the stuff you need to substantiate what you are doing for audit purposes. Those are key issues and things that you have to be able to prove.

The second bucket is numbers that are important to senior management. That tends to focus on incidents and spending: basically how many incidents happen, how that is trending, and how long it takes to deal with each situation. On the spending side, senior folks want to know about security spend as a percentage of IT spend and of total revenues, as well as how that compares to peers.

Then there is the third bucket: the operational metrics we use to improve and streamline our processes. It’s the old saw about how you can’t manage what you don’t measure – well, the metrics defined within NSO Quant represent pretty much everything we can measure. That doesn’t mean you should measure everything, but the idea of this project is to decompose the processes as much as possible, to provide a basis for measurement. Again, not all companies do all the process steps. Actually, most companies don’t do much from a process standpoint – besides fight fires all day. Gathering this kind of data requires a significant amount of effort, and will not be for everyone. But if you are trying to understand operationally how much time you spend on things, and then use that data to trend and improve your operations, you can get payback. Or if you want to use the metrics to determine whether it even makes sense for you to be performing these functions (as opposed to outsourcing), then you need to gather the data.

But clearly the CIO and other C-level folks aren’t going to be very interested in the amount of time it takes you to monitor sources for IDS/IPS signature updates. They care about outcomes, and most of the time you spend with them needs to be focused on getting buy-in and updating status on commitments you’ve already made. Hopefully that clarifies things a bit.

Now that I’m off the soapbox, let me point to a few more NSO Quant metrics posts that went up over the past few days. We’re at the end of the process, so there are two more posts I’ll link to on Monday, and then we’ll be packaging the research up into a pretty and comprehensive document.

  • NSO Quant: Manage Metrics – Signature Management
  • NSO Quant: Manage Metrics – Process Change Request and Test/Approve
  • NSO Quant: Manage Metrics – Deploy and Audit/Validate
  • NSO Quant: Manage Metrics – Monitor Issues/Tune IDS/IPS


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments – just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.