Securosis

Research

Quick Wins with DLP Light: The Process

The objective of the Quick Wins process is to get results and show value as quickly as possible, while setting yourself up for long-term success. Quick Wins for DLP Light is related to the Quick Wins for DLP process, but heavily modified to deal both with the technical differences and the different organizational goals we see in DLP Light projects. Keep this process in perspective – many of you will already be pretty far down your DLP Light path and might not need all these steps. Take what you need and ignore the rest.

Prepare

There are two preparatory steps before kicking off the project:

Establish Your Process

Nearly every DLP customer we talk with discovers actionable offenses committed by employees as soon as they turn the tool on. Some of these require little more than contacting a business unit to change a bad process, but quite a few result in security guards escorting people out of the building, or even legal action. Even if you aren’t planning on moving straight to enforcement mode, you need a process in place to manage the issues that will crop up once you activate your tool. You should set up processes to handle the three common incident categories:

Business Process Failures: DLP violations often result from poor business processes, such as retaining sensitive customer data or emailing unencrypted healthcare information to insurance providers. This process is about working with the business unit to fix the problem.

Employee Violations: These are often accidental, but most DLP deployments result in identification of some malicious activity. Your process should focus on education to avoid future accidents, as well as working with business unit managers, HR, and legal to handle malicious activity.

Security Incidents: Traditional security incidents, usually from an external source, which require response and investigation.
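The three incident categories above lend themselves to a simple triage routing table. Here is a hypothetical sketch in Python – the category names, alert fields, and playbook text are all illustrative, not taken from any DLP product:

```python
# Hypothetical sketch of routing DLP alerts into the three categories above.
# The category names, alert fields, and playbooks are illustrative only.
RESPONSE_PLAYBOOKS = {
    "business_process_failure": "work with the business unit to fix the process",
    "employee_violation": "user education; escalate malicious activity to HR/legal",
    "security_incident": "standard incident response and investigation",
}

def triage(alert: dict) -> str:
    """Pick a response playbook from simple context clues in the alert."""
    if alert.get("source") == "external":
        category = "security_incident"
    elif alert.get("recurring_by_process"):  # same channel, many users
        category = "business_process_failure"
    else:  # an individual user's action
        category = "employee_violation"
    return RESPONSE_PLAYBOOKS[category]

print(triage({"source": "internal", "recurring_by_process": True}))
# -> work with the business unit to fix the process
```

However you encode it, the point is to decide the routing before you turn the tool on, so the first wave of incidents lands in a defined process rather than in someone’s inbox.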
Determine Existing DLP Capabilities

The next step is to determine which DLP Light capabilities you have in-house, even if the project is driven by a particular tool. You might find you already have more capability than you realize. Check for existing DLP features in the main technology areas covered in our last post. It’s also worth reviewing whether you are current on product versions, as DLP features might be cheap or even free if you upgrade (discounting upgrade costs, of course). Build a list of the DLP Light tools and features you have available, with the following information:

The tool/feature
Where it’s deployed
Protected “channels”: network protocols, storage locations, endpoints, etc.
Content analysis capabilities/categories
Workflow capabilities: DLP-specific vs. general-purpose; ability to integrate with SIEM and other management tools

This shouldn’t take long and will help you choose the best path for implementation.

Determine Objective

The next step is to determine your goal. Are you more concerned with protecting a specific type of data? Or do you want to look more broadly at overall information usage? While the full-DLP Quick Wins process is always focused on information gathering vs. enforcement, this isn’t necessarily the case in a DLP Light project. No matter your specific motivation, we find that individual projects fall into three main categories:

Focused Monitoring: The goal is to track usage of, and generate alerts on, a specific kind of information. This is most often credit card numbers, healthcare data, or other personally identifiable information.

Focused Enforcement: You concentrate on the same limited data types as above, but instead of merely alerting you plan to enforce policies and block activity.

General Information Gathering: Rather than focusing on a single type of data, you use tools to get a better sense of information usage throughout the organization.
You turn on as many policies to monitor information of interest as possible.

Choose Deployment Type

This is a three-step process for making the final decisions required to deploy:

Map desired coverage channels: Determine where you want to monitor and/or enforce – email, endpoints (USB), etc. List every place you want to cover vs. what you know you already can cover with your existing capabilities. This also needs to map to your objective and content analysis requirements.

Match desired to existing coverage: Now figure out what you have and where the gaps are.

Fill the gaps: Obtain any additional products or licenses so that your project can meet your objectives.

Your entire project might be as simple as, “we want to catch credit card numbers in email using our existing tool”, in which case this entire process up to now probably took about 10 seconds. But if you need a little more guidance, this will help.

Implement and Monitor

Now it’s time to integrate the product (if needed), turn it on, and collect results. The steps are:

Select content analysis policies: For a focused deployment, this will only include the policy that targets the specific data you want to protect, although if you use multiple products that aren’t integrated you will use the most appropriate policies in each tool. For a general deployment you turn on every policy of interest (without wrecking performance – check with your vendor).

Install (if needed)

Integrate with other tools/workflow: If you need to integrate multiple components, or with a central workflow or incident management tool, do that now.

Turn on monitoring

We have a few hints to improve your chance of success:

Don’t enable enforcement yet – even if enforcement is your immediate goal, start with monitoring. Understand how the tool can impact workflow first, as we will discuss next.

Don’t try to handle every incident at first.
You will likely need to tune policies and educate users over time before you have the capacity to handle every incident – depending on your focus. Handle the most egregious events now and accept that you will handle the rest later.

Leverage user education. Users often don’t know they are violating policies. One excellent way to reduce your incident volume is to send them automated notifications based on policy violations. This has the added advantage of helping you identify the egregious violators later on.

Analyze

At this point you have focused your project, picked your tools, set your policies, and started monitoring. Now it’s


Fool us once… EMC/RSA Buys NetWitness

To no one’s surprise (after NetworkWorld spilled the beans two weeks ago), RSA/EMC formalized its acquisition of NetWitness. I guess they don’t want to get fooled again the next time an APT comes to visit. Kidding aside, we have long been big fans of full packet capture, and believe it’s a critical technology moving forward. On that basis alone, this deal looks good for RSA/EMC.

Deal Rationale

APT, of course. Isn’t that the rationale for everything nowadays? Yes, that’s a bit tongue in cheek (okay, a lot), but for a long time we have been saying that you can’t stop a determined attacker, so you need to focus on reacting faster and better. The reality remains that the faster you figure out what happened and remediate (as much as you can), the more effectively you contain the damage. NetWitness gear helps organizations do that. We should also tip our collective hats to Amit Yoran and the rest of the NetWitness team for a big economic win, though we don’t know for sure how big a win. NetWitness was early into this market and did pretty much all the heavy lifting to establish the need, stand up an enterprise-class solution, and show the value within a real attack context. They also showed that having a llama at a conference party can work for lead generation. We can’t minimize the effect that will have on trade shows moving forward. So how does this help EMC/RSA? First of all, full packet capture solves a serious problem for obvious targets of determined attackers. Regardless of whether the attack was a targeted phish/Adobe 0-day or Stuxnet type, you need to be able to figure out what happened, and having the actual network traffic helps the forensics guys put the pieces together. Large enterprises and governments have figured this out and we expect them to buy more of this gear this year than last. Probably a lot more. So EMC/RSA is buying into a rapidly growing market early. But that’s not all.
There is a decent amount of synergy with the rest of RSA’s security management offerings. Though you may hear some SIEM vendors pounding their chests as a result of this deal, NetWitness is not SIEM. Full packet capture may do some of the same things (including alerting on possible attacks), but its analysis is based on what’s in the network traffic – not logs and events. More to the point, the technologies are complementary – most customers pump NetWitness alerts into a SIEM for deeper correlation with other data sources. Additionally, some of NetWitness’ new visualization and malware analysis capabilities supplement the analysis you can do with SIEM. Not coincidentally, this is how RSA positioned the deal in the release, with NetWitness and EnVision data being sent over to Archer for GRC (whatever that means). Speaking of EnVision, this deal may take some of the pressure off that debacle. Customers now have a new shiny object to look at, while maybe focusing a little less on moving off the RSA log aggregation platform. It’s no secret that RSA is working on the next generation of the technology, and being able to offer NetWitness to unhappy EnVision customers may stop the bleeding until the next version ships. A side benefit is that the sheer amount of network traffic to store will drive some back-end storage sales as well. For now, NetWitness is a stand-alone platform. But it wouldn’t be too much of a stretch to see some storage/archival integration with EMC products. EMC wouldn’t buy technology like NetWitness just to drive more storage demand, but it won’t hurt.

Too Little, Too Late (to Stop the Breach)

Lots of folks drew the wrong conclusion: that RSA bought NetWitness because of their recent breach. But these deals don’t happen overnight, so this acquisition has been in the works for quite a while. But what could better justify buying a technology than helping to detect a major breach? I’m sure EMC is pretty happy to control that technology.
The trolls and haters focus on the fact that the breach still happened, so the technology couldn’t work that well, right? Actually, the biggest issue is that EMC didn’t have enough NetWitness throughout their environment. They might have caught the breach earlier if they had the technology more widely deployed. Then again, maybe not – you never know how effective any control will be at any given time against any particular attack – but EMC/RSA can definitely make the case that they could have reacted faster if they had NetWitness everywhere. And now they likely will.

Competitive Impact

The full packet capture market is still very young. There are only a handful of direct competitors to NetWitness, all of whom should see their valuations skyrocket as a result of this deal. Folks like Solera Networks are likely grinning from ear to ear today. We also expect a number of folks in adjacent businesses (such as SIEM) to start dipping their toes into this water. Speaking of SIEM, NetWitness did have partnerships with the major SIEM providers to send them data, and this deal is unlikely to change much in the short term. But we expect to see a lot more integration down the road between NetWitness, EnVision Next, and Archer, which could create a competitive wedge for RSA/EMC in large enterprises. So we expect the big SIEM players to either buy or build this capability over the next 18 months to keep pace. Not that they aren’t all over the APT marketing already.

Bottom Line

This is a good deal for RSA/EMC – acquiring NetWitness provides a strong, differentiated technology in what we believe will be an important emerging market. But with RSA’s mixed results in leveraging acquired technology, it’s not clear that they will remain the leader in two years. But if they provide some level of real integration in that timeframe, they will have a very compelling set of products for security/compliance management. This is also a good


Quick Wins with DLP Light: Technologies and Architectures

DLP Light tools cover a wide range of technologies, architectures, and integration points. We can’t highlight them all, so here are the core features and common architectures. We have organized them by key features and deployment location (network, endpoint, etc.):

Content Analysis and Workflow

Content analysis support is the single defining element for Data Loss Prevention – “Light” or otherwise. Without content analysis we don’t consider a tool or feature DLP, even if it helps to “prevent data loss”. Most DLP Light tools start with some form of rule/pattern matching – usually regular expressions, often with some additional contextual analysis. This base feature covers everything from keywords to credit card numbers. Most customers don’t want to build their own rules, so the tools come with pre-built policies, which are sometimes updated as part of a maintenance contract or license renewal. The most common policies identify credit card data for PCI compliance, because that drives a large portion of the market. We also see plenty of PII detection, followed by healthcare/HIPAA data discovery – both to meet clear compliance requirements.

DLP Light tools and features may or may not have their own workflow engine and user interface for managing incidents. Most don’t provide dedicated workflow for DLP, instead integrating policy alerts into whatever existing console and workflow the tool uses for its primary function. This isn’t necessarily better or worse – it depends on your requirements.

Network Features and Integration

DLP features are increasingly integrated into existing network security tools, especially email security gateways. The most common examples are:

Email Security Gateways: These were the first non-DLP tools to include content analysis, and tend to offer the broadest policy/category coverage. Many of you already deploy some level of content-based email filtering.
Email gateways are also one of the main integration points with full DLP solutions: all the policies and workflow are managed on the DLP side, but analysis and enforcement are integrated with the gateway directly rather than requiring a separate mail hop. Depending on your specific tool, internal email may or may not be covered.

Web Security Gateways: Some web gateways now directly enforce DLP policies on the content they proxy, for example preventing files with credit card numbers from being uploaded to webmail and social networking services. Web proxies are the second most common integration point for DLP solutions because, as we described in the Technical Architecture section, they proxy web and FTP traffic and make a perfect filtering and enforcement point. These are also the tools you will use to reverse proxy SSL connections to monitor those encrypted communications, which is a necessity for scanning and blocking inbound malicious content.

Unified Threat Management: UTMs provide broad network security coverage, including at least firewall and IPS capabilities, but usually also web filtering, an email security gateway, remote access, and web content filtering (antivirus). These provide a natural location for adding network DLP coverage.

Intrusion Detection and Prevention Systems: IDS/IPS tools already perform content inspection, and so are a natural location for additional DLP analysis. This is usually basic analysis integrated into existing policy sets, rather than a new full content analysis engine.

SIEM and Log Management: All major SIEM tools can accept alerts from DLP solutions, and many can correlate them with other collected activity. Some SIEM tools also offer DLP features, depending on what kinds of activity they can collect for content analysis. We have placed this in the network section because that’s what they most commonly integrate with, but they can also work with other DLP deployment locations.
Log management tools tend to be more passive, but increasingly include some basic DLP-like features for analyzing data.

Endpoint Features and Integration

DLP features have appeared in various endpoint tools aside from dedicated DLP products since practically before there was a DLP market. This presence continues to expand, especially as interest grows in controlling USB usage without unacceptable business impact.

Endpoint Protection Platforms: EPP is the term for comprehensive endpoint suites that start with anti-virus, and may also include portable device control, intrusion prevention, anti-spam, remote access, Network Admission Control, application whitelisting, etc. Many EPP vendors have added basic DLP features – most often for monitoring local files or storage transfers of sensitive information, and some with support for network monitoring and enforcement.

USB/Portable Device Control: Some of these tools offer basic DLP capabilities, and we are seeing others evolve to offer somewhat extensive endpoint DLP coverage – with multiple detection techniques, multivariate policies, and even dedicated workflow. When evaluating this option, keep in mind that some tools position themselves as offering DLP capabilities but lack any content analysis – instead relying on metadata or other context.

‘Non-Antivirus’ EPP: Some endpoint security platforms do more than just portable device control, but are not designed around antivirus like other EPP tools. This category covers a range of tools, but the features offered are generally comparable to other offerings.

Overall, most people deploying DLP features on an endpoint (without a dedicated DLP solution) are focused on scanning the local hard drive and/or monitoring/filtering file transfers to portable storage. But as we described earlier, you might also see anything from network filtering to application control integrated into endpoint tools.
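As a concrete illustration of the rule/pattern matching these content analysis engines start with, here is a minimal sketch of credit card detection – a regular expression plus a Luhn checksum to weed out false positives. The pattern and function names are my own, and commercial engines are far more sophisticated (proximity keywords, contextual rules, and so on):

```python
import re

# Hypothetical sketch – not any vendor's engine. A card-number regex
# plus a Luhn checksum to cut down on false positives.
CARD_RE = re.compile(r"\b\d(?:[ -]?\d){12,15}\b")

def luhn_valid(number: str) -> bool:
    """Standard Luhn mod-10 check over the digits in the match."""
    digits = [int(c) for c in number if c.isdigit()]
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:  # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_card_numbers(text: str) -> list:
    """Return candidate card numbers that also pass the Luhn check."""
    return [m.group() for m in CARD_RE.finditer(text) if luhn_valid(m.group())]

print(find_card_numbers("Order ref 4111 1111 1111 1111, thanks"))
# -> ['4111 1111 1111 1111']
```

Even this toy version shows why pre-built policies matter: the regex alone would flag plenty of phone numbers and order IDs, and tuning that out is exactly the work customers pay vendors to do.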
Storage Features and Integration

We don’t see nearly as much DLP Light in storage as in networking and endpoints – in large part because there aren’t as many clear security integration points. Fewer organizations have any sort of storage security monitoring, whereas nearly every organization performs network and endpoint monitoring of some sort. But while we see less DLP Light, as we have already discussed, we see extensive integration on the DLP side for different types of storage repositories.

Database Activity Monitoring and Vulnerability Assessment: DAM products, many of which now include or integrate with Database Vulnerability Assessment tools, now sometimes include content analysis capabilities.

Vulnerability Assessment: Some vulnerability assessment tools can scan for basic DLP policy violations if they include the ability to passively monitor network traffic or scan storage.

Content Classification, Forensics, and Electronic Discovery: These tools aren’t dedicated to


White Paper: Network Security in the Age of *Any* Computing

We all know about the challenges for security professionals posed by mobile devices, and by the need to connect to anything from anywhere. We have done some research on how to start securing those mobile devices, and have broadened that research with a network-centric perspective on these issues. Let’s set the stage for this paper:

Everyone loves their iDevices and Androids. The computing power that millions now carry in their pockets would have required a raised floor and a large room full of big iron just 25 years ago. But that’s not the only impact we see from this wave of consumerization, the influx of consumer devices requiring access to corporate networks. Whatever control you thought you had over the devices in the IT environment is gone. End users pick their devices and demand access to critical information within the enterprise. Whether you like it or not. And that’s not all. We also have demands for unfettered access from anywhere in the world at any time of day. And though smart phones are the most visible devices, there are more. We have the ongoing tablet computing invasion (iPad for the win!); and a new generation of workers who demand the ability to choose their computers, mobile devices, and applications. Even better, you aren’t in a position to dictate much of anything moving forward. It’s a great time to be a security professional, right?

In this paper we focus on the network architectures and technologies that can help you protect critical corporate data, given your requirements to provide users with access to critical and sensitive information on any device, from anywhere, at any time.

A special thanks to ForeScout for sponsoring the research. Find it in the research library or download the PDF directly: Network Security in the Age of Any Computing: Risks and Options to Control Mobile, Wireless, and Endpoint Devices.


Friday Summary: April 1, 2011

Okay folks – raise your hands for this one. How many of you get an obvious spam message from a friend or family member on a weekly basis? For me it’s more like monthly, but it sure is annoying. The problem is that when I get these things I have a tendency to try and run them down to figure out exactly what was compromised. Do the headers show it came from their computer? Or maybe their web-based email account? Or is it just random spoofing from a botnet… which could mean any sort of compromise? Then, assuming I can even figure that part out, I email or call them up to let them know they’ve been hacked. Which instantly turns me into their tech support. This is when things start to suck. Because, for the average person, there isn’t much they can do. They expect their antivirus to work and the initial reaction is usually “I ran a scan and it says I’m clean”. Then I have to tell them that AV doesn’t always work. Which goes over great, as they tell me how much they spent on it. Depending on what I can pick up from the email headers we then get to cover the finer points of changing webmail passwords, checking for silent forwards, and setting recovery accounts. Or maybe I tell them their computer is owned for sure and they need to nuke it from orbit (backup data, wipe it, reinstall, scan data, restore data). None of that is remotely possible for most people, which means they may have to spend more than their PoS is worth paying the Geek Squad to come out, steal their drunken naked pictures, and lose the rest of their data. After which I might still get spam, if the attacker sniffed their address book and shoveled it onto some zombie PC(s). Or they ignore me. I had a lawyer friend do that once. On a computer used sometimes for work email. Sigh.
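For those who want to try the header sleuthing themselves, here is a rough sketch using Python’s standard email parser. The raw message, hosts, and IPs below are invented for illustration; real Received chains are longer, messier, and the bottom hops can be forged:

```python
import email

# A rough sketch of running down a suspicious message. Each relay
# prepends its own Received header, so reading the list bottom-up
# walks from the (claimed) origin toward your own mail server.
raw = """\
Received: from mail.example.org (mail.example.org [203.0.113.7])
    by mx.recipient.net; Fri, 1 Apr 2011 09:00:00 -0400
Received: from [198.51.100.23] (unknown [198.51.100.23])
    by mail.example.org; Fri, 1 Apr 2011 08:59:58 -0400
From: friend@example.org
Subject: you have to see this!!!

click here http://totally-legit.invalid
"""

msg = email.message_from_string(raw)
for hop in reversed(msg.get_all("Received", [])):
    # Print just the "from ... by ..." portion of each hop
    print(hop.split(";")[0].strip())
```

If the bottom hop claims to come from your friend’s home ISP, their machine is suspect; if it originates at their webmail provider, the account is the likelier compromise. And remember that nothing below the first hop your own server wrote is trustworthy.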
There’s really no good answer unless you have a ton of spare time to spend hunting down the compromise… which technically might not be them anyway (no need to send the spam from the person you compromised if another name in the social network might also do the trick). For immediate family I will go fairly deep to run things down (including getting support from vendor friends on occasion), but I have trained most of them. For everyone else? I limit myself to a notification and some basic advice. Then I add them to my spam filter list, because as long as they can still read email and access Facebook they don’t really care. On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Mike quoted on metrics in Dark Reading.
Adrian quoted in ComputerWorld on McAfee’s acquisition of Sentrigo.

Favorite Securosis Posts

Rich: PROREALITY: Security is rarely a differentiator. There’s a bare minimum line you need to keep customer trust. Anything more than that rarely matters.
Adrian Lane: Captain Obvious Speaks: You Need Layers.
Mike Rothman: File Activity Monitoring: Index. You’ll be hearing a lot about FAM in the near future. And you heard it here first.

Other Securosis Posts

White Paper: Network Security in the Age of Any Computing.
Incite 3/30/2011: The Silent Clipper.
Comments on Ponemon’s “What Auditors think about Crypto”.
Quick Wins with DLP Light.
FAM: Policy Creation, Workflow, and Reporting.
FAM: Selection Process.
Security Benchmarking, Going Beyond Metrics: Introduction.
Security Benchmarking, Going Beyond Metrics: Security Metrics (from 40,000 feet).

Favorite Outside Posts

Rich: Errata Security: “Cybersecurity” and “hacker”: I’m taking them back. If I try to describe what I do (security analyst) they think I’m from Wall St. If I say “cybersecurity analyst” they get it right away. To be honest, I really don’t know why people in the industry hate “cyber”. You dislike Neuromancer or something?
Adrian Lane: The 93,000 Firewall Rule Problem.
Mike Rothman: The New Corporate Perimeter. If you missed this one, read it. Now. GP is way ahead on thinking about how security architecture must evolve in this mobile/cloud reality. The world is changing, folks – disregard it and I’ve got a front end processor to sell you.
Rich: BONUS LINK: The writing process. Oh my. Oh my my my. If you ever write on deadline and word count, you need to read this.

Research Reports and Presentations

Network Security in the Age of Any Computing.
The Securosis 2010 Data Security Survey.
Monitoring up the Stack: Adding Value to SIEM.
Network Security Operations Quant Metrics Model.
Network Security Operations Quant Report.
Understanding and Selecting a DLP Solution.
White Paper: Understanding and Selecting an Enterprise Firewall.
Understanding and Selecting a Tokenization Solution.

Top News and Posts

European Parliament computer network breached.
BP loses laptop with private info on 13,000 people. BP Spills Data Too.
The DataLossDB project welcomes Dissent! As we mentioned in the intro, you should support this project.
GoGrid Security Breach.
Restaurant chain fined under Mass privacy law.
Mass SQL Injection Attack.
NSA Investigates NASDAQ Hack.
Dozens of exploits released for popular SCADA programs.
Twitter, JavaScript Defeat NYT’s $40m Paywall.

Blog Comment of the Week

For the past couple years we’ve been donating to Hackers for Charity, but in honor of Dissent joining the DataLossDB project we are directing this week’s donation ($100) to The Open Security Foundation. This week’s best comment goes to SomeSecGuy, in response to PROREALITY: Security is rarely a differentiator.

TJ Maxx’s revenues went UP after their big breach. What mattered more to its customers than security? A good deal on clothes, I guess. There probably is a market segment that cares more about security than other factors but I don’t know what it is. Price is typically the primary driver even for business decisions.


PROREALITY: Security is rarely a differentiator

I’ve been in this business a long time – longer than most, though not as long as some. That longevity provides perspective, and has allowed me to observe the pendulum swinging back and forth more than once. This particular pendulum is the security as an enabler concept – you know, positioning security not as an overhead function but as a revenue driver (either direct or indirect). Jeremiah’s post earlier this week, PROTIP: Security as a Differentiator, brought back that periodic (and ultimately fruitless) discussion. His general contention is that security can differentiate an offering, ultimately leading to security being a vehicle that drives revenue. So before we start down this path again, let me squash it like the cockroach it is. First we examine one of Jeremiah’s contentions:

When security is made visible (i.e. help customers be and feel safe), the customer may be more inclined to do business with those who clearly take the matter seriously over others who don’t.

That’s not entirely false. But the situations (or in marketing speak, segments) where that is true are very limited. Banks have been telling me for years that churn increases after a breach is publicized, and the ones that say they are secure gain customers. I still don’t buy it, mostly because the data always seems to come from some vendor pushing their product to protect bank customer data. The reality is that words do not follow behavior when it comes to security. Whether you sit on the vendor side or the user side you know this. When you ask someone if they are worried about security, of course they say yes. Every single time. But when you ask them to change their behavior – or more specifically not do something they want to because it’s a security risk – you see the reality. The vast majority of people don’t care about security enough to do (or not do) anything. Jeremiah is dreaming – if he were describing reality, everyone related to the security business would benefit.
Unfortunately it’s more of a PRODREAM than a PROTIP. Or maybe even a PROHALLUCINATION. He’s not high on peyote or anything. Jer is high on the echo chamber. When you hang around all day with people who care about security, you tend to think the echo chamber reflects the mass market. It doesn’t – not by a long shot. So spending a crapload of money on really being secure is a good thing to do. To be clear, I would like you to do that. But don’t do it to win more business – you won’t, and you’ll be disappointed – or your bosses will be disappointed in you for failing to deliver. Invest in security because it’s the right thing to do. For your customers and for the sustainability of your business. You may not get a lot of revenue upside from being secure, but you can avoid revenue downside. I believe this to be true for most businesses, but not all. Cloud service providers absolutely can differentiate based on security. That will matter to some customers and possibly ease their migration to the cloud. There are other examples of this as well, but not many. I really wish Jeremiah were right. It would be great for everyone. But I’d be irresponsible if I didn’t point out the cold, hard reality.

Photo credit: “3 1 10 Bearman Cartoon Cannabis Skunk Hallucinations” originally uploaded by Bearman2007


On Preboot Authentication and Encryption

I am working on an encryption project – evaluating an upcoming product feature for a vendor – and the research is more interesting than I expected. Not that the feature is uninteresting, but I thought I knew all the answers going into this project. I was wrong. I have been talking with folks on the Twitters and in private interviews, and have discovered that far more organizations than I suspected are configuring their systems to automatically skip preboot authentication and simply boot up into Windows or Mac OS X (yes, for real, a bunch are using disk encryption on Macs). For those of you who don’t know, with most drive encryption you have a mini operating system that boots first, so you can authenticate the user. Then it decrypts and loads the main operating system (Windows, Mac OS X, Linux, etc.). Skipping the mini OS requires you to configure it to automatically authenticate and load the operating system without a password prompt. Organizations tend to do this for a few reasons: So users don’t have to log in twice. So you don’t have to deal with managing and synchronizing two sets of credentials (preboot and OS). To reduce support headaches. But the convenience factor is the real reason. The problem with skipping preboot authentication is that you then rely completely on OS authentication to protect the device. My pentester friends tell me they can pretty much always bypass the OS encryption. This may also be true for a running/sleeping/hibernating system, depending on how you have encryption configured (and how your product works). In other words – if you skip preboot, the encryption generally adds no real security value. In the Twitter discussion about advanced pen testering, our very own David Mortman asked: @rmogull Sure but how many lost/stolen laptops are likely to be attacked in that scenario vs the extra costs of pre-boot? Which is an excellent point. What are the odds of an attacker knowing how to bypass the encryption when preboot isn’t used? 
And then I realized that in that scenario, the “attacker” is most likely someone picking up a “misplaced” laptop, and even basic (non-encryption) OS security is good enough. Which leads to the following decision tree:

  1. Are you worried about attackers who can bypass OS authentication? If so, encrypt with preboot authentication; if not, continue to step 2.
  2. Do you need to encrypt only for compliance (meaning security isn’t a priority)? If so, encrypt and disable preboot; if not, continue to step 3.
  3. Encrypt with preboot authentication.

In other words, encrypt if you worry about data loss due to lost media or are required by compliance. If you encrypt for compliance and don’t care about data loss, then you can skip preboot.
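For the literal-minded, the decision tree above can be sketched as a tiny function. This is a minimal illustration only; the parameter names are mine, not from any encryption product:

```python
def use_preboot_auth(worried_about_os_auth_bypass: bool,
                     compliance_only: bool) -> bool:
    """Sketch of the decision tree above: returns True when preboot
    authentication is worth the extra login, False when you can skip it."""
    # Step 1: if attackers who can bypass OS authentication are in your
    # threat model, the encryption only helps with preboot enabled.
    if worried_about_os_auth_bypass:
        return True
    # Step 2: you are encrypting purely to satisfy compliance (security
    # is not the priority), so skipping preboot is acceptable.
    if compliance_only:
        return False
    # Step 3: otherwise, encrypt with preboot authentication.
    return True

# A stolen-laptop worrier keeps preboot; a checkbox encryptor skips it.
print(use_preboot_auth(True, False), use_preboot_auth(False, True))
```

The point of writing it out is how short the "skip preboot" branch is: it only fires when compliance is the sole driver.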


Incite 3/30/2011: The Silent Clipper

I’m very fortunate to have inherited Rothman hair, which is gray but plentiful and grows fast. Like fungus. Given my schedule, I tend to wait until things get lost in my hair before I get it cut. Like birds; or yard debris; or Nintendo DS games. A few weeks back the Boss told me to get it cut when I lost my iPhone in my hair. So I arranged a day to hit the barber I have frequented for years. I usually go on Mondays when I can, because his partner is off. These guys have a pretty sophisticated queuing system, honed over 40+ years. Basically you wait until your guy is open. That works fine unless the partner is open and your guy is backed up. Then the partner gives me the evil eye as he listens to his country music. But I have to stay with my guy because he has a vacuum hooked up to his clipper. Yes, I wait for my guy because he uses a professional Flowbee. But when I pulled up, the shop was closed. I’ve been going there for 7 years and the shop has never been closed on Monday. Then I looked at the sign, which shows hours only for the partner – my guy’s hours aren’t listed. Rut roh, I got a bad feeling. But I was busy, so I figured I’d go back later in the week and see what happened. I went in Thursday, and my guy wasn’t there. Better yet, the partner was backed up, but I had just lost one of the kids in my hair, so I really needed a cut. I’m quick on the uptake, so I figured something was funky, but all my guy’s stuff was still there – including pictures of his grandkids. It’s like the place that time forgot. But you can’t escape time. It catches everyone. Finally the situation was clarified when a customer came in to pay his respects to the partner. My fears were confirmed: my guy was gone, his trusty clippers silenced. The Google found his obituary. Logically I know death completes the circle of life, and no one can escape. Not even my barber. Truth be told, I was kind of sad. But I probably shouldn’t be. Barber-man lived a good life.
He cut hair for decades and enjoyed it. He did real estate as well. He got a new truck every few years, so the shop must have provided OK. He’d talk about his farm, which kept him busy. I can’t say I knew him well, but I’m going to miss him. So out of respect I waited and then sat in the partner’s chair. Interestingly enough he gave me a great cut, even though I was covered in hair without the Flowbee. I was thinking I’d have to find a new guy, but maybe I’ll stick with partner-man. Guess there is a new barber-man in town. Godspeed Richard. Enjoy the next leg of your journey. -Mike

Photo credits: “Barber Shop” originally uploaded by David Smith

Incite 4 U

Can I call you Dr. Hacker?: Very interesting analysis here by Ed Moyle about whether security should be visionary. Personally I don’t know what that means, because our job is to make sure visionary business leaders can do visionary things without having critical IP or private data show up on BitTorrent. But the end of the post on whether security will be innovation-driven (like product development), standards-driven, innovation-averse (like accounting), or standards-driven, innovation-accepting (like medicine) got me thinking. We’d like to think we’ll be innovation-driven, but ultimately I suspect we’ll end up like medicine. Everyone still gets sick (because the viruses adapt to our defenses), costs continue to skyrocket, and the government eventually steps in to make everything better. Kill me now, Dr. Hacker. – MR

Learn clarity from the (PHP)Fog: One of the things that fascinates me about breaches (and most crisis events) is how the affected react. As I wrote about last week, most people do almost exactly the wrong thing.
But as we face two major breaches within our industry, at RSA (“everyone pretend you don’t know what’s going on even though it’s glaringly obvious”) and Comodo (“we were the victim of a state-sponsored attack from Iran, not a teenager, we swear”), perhaps we should learn some lessons from PHPFog (“How We Got Owned by a Few Teenagers (and Why It Will Never Happen Again)”). Honesty is, by far, the best way to maintain the trust of your customers and the public. Especially when you use phrases like, “This was really naive and irresponsible of me.” Treat your customers and the public like adults, not my 2-year-old. Especially when maintaining secrecy doesn’t increase their security. – RM

MySQL PwNaGe: For the past few days, the news that mysql.com has both a SQL injection vulnerability and a Cross Site Scripting (XSS) vulnerability has been making the rounds. The vulnerabilities are not in the MySQL database engine, but in the site itself. Detailed information from the hacked site was posted on Full Disclosure last Sunday as proof. Apparently the MySQL team was alerted to the issue in January, and this looks like a case of “timely disclosure” – they could have taken the hack further if they wanted. There aren’t many takeaways here, other than that SQL injection is still a leading attack vector, and you should have quality passwords to help survive dictionary attacks in the aftermath of a breach. Still no word from Oracle, as there is no acknowledgement of the attack on mysql.com. I wonder if they will deploy a database firewall? – AL

APT: The FUD goes on and on and on and on: I applaud Chris Eng’s plea for the industry to stop pushing the APT FUD at all times. He nails the fact that vendors continue to offer solutions to the APT because they don’t want to miss out when the “stop APT project” gets funded. The nebulous definition of APT helps vendors obfuscate the truth, and as Chris points out it frustrates many of us. Yes, we should call out vendors for


Security Benchmarking, Going Beyond Metrics: Security Metrics (from 40,000 feet)

In our introduction to Security Benchmarking, Going Beyond Metrics, we spent some time defining metrics and pointing out that they have multiple consumers, which means we need to package and present the data to these different constituencies. As you’ll see, there is no lack of things to count. But in reality, just because you can count something doesn’t mean you should. So let’s dig a bit into what you can count. Disclaimer: we can only go so deep in a blog series. If you are intent on building a metrics program, you must read Andy Jaquith’s seminal work Security Metrics: Replacing Fear, Uncertainty and Doubt. The book goes into great detail about how to build a security metrics program. The first significant takeaway is how to define a good security metric in the first place. Good metrics are:

  • Expressed as numbers
  • Have one or more units of measure
  • Measured in a consistent and objective way
  • Can be gathered cheaply
  • Have contextual relevance

Contextual relevance tends to be the hard thing. As Andy says in his March 2010 security metrics article in Information Security magazine: “the metrics must help someone–usually the boss–make a decision about an important security or business issue.” That’s where most security folks tend to fall down, focusing on things that don’t matter, or drawing suspect conclusions from operational data. For example, generating a security posture rating from AV coverage won’t work well.

Consensus Metrics

We also need to tip our hats to the folks at the Center for Internet Security, who have published a good set of starter security metrics, built via their consensus approach. Also take a look at their QuickStart guide, which does a good job of identifying the process to implement a metrics program. Yes, consensus involves lowest common denominators, and their metrics are no different. But keep things in context: the CIS document provides a place to start, not the definitive list of what you should count.
Taking a look at the CIS consensus metrics:

  • Incident Management: Cost of incidents, mean cost of incidents, mean incident recovery cost, mean time to incident discovery, number of incidents, mean time between security incidents, mean time to incident recovery
  • Vulnerability Management: Vulnerability scanning coverage, % of systems with no severe vulnerabilities, mean time to mitigate vulnerabilities, number of known vulnerabilities, mean cost to mitigate vulnerabilities
  • Patch Management: Patch policy compliance, patch management coverage, mean time to patch, mean cost to patch
  • Configuration Management: % of configuration compliance, configuration management coverage, current anti-malware compliance
  • Change Management: Mean time to complete changes, % of changes with security review, % of changes with security exceptions
  • Application Security: Number of applications, % of critical applications, application risk assessment coverage, application security testing coverage
  • Financial: IT security spending as % of IT budget, IT security budget allocation

Obviously there are many other types of information you can collect – particularly from your identity, firewall/IPS, and endpoint management consoles. Depending on your environment these other metrics may be important for operations. We just want to provide a rough sense of the kinds of metrics you can start with. For those gluttons for punishment who really want to dig in, we have built Securosis Quant models that document extremely granular process maps and the associated metrics for Patch Management, Network Security Operations (monitoring/managing firewalls and IDS/IPS), and Database Security. We won’t claim all these metrics are perfect. They aren’t even supposed to be – nor are they all relevant to all organizations. But they are a place to start. And most folks don’t know where to start, so this is a good thing.
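To make a couple of these concrete, here is a minimal sketch of how two of the consensus metrics might be computed. The incident timestamps and patch counts are entirely hypothetical, and the field layout is our own illustration, not taken from the CIS documents:

```python
from datetime import datetime

# Hypothetical incidents: (when it occurred, when we discovered it).
incidents = [
    (datetime(2011, 1, 3, 9, 0), datetime(2011, 1, 5, 9, 0)),     # 48 hours
    (datetime(2011, 2, 10, 12, 0), datetime(2011, 2, 11, 0, 0)),  # 12 hours
]

# Mean time to incident discovery, in hours.
hours = [(found - occurred).total_seconds() / 3600
         for occurred, found in incidents]
mttd_hours = sum(hours) / len(hours)

# Patch policy compliance: % of in-scope systems patched within the
# window your policy allows (counts are made up).
systems_in_scope = 200
patched_in_window = 184
patch_compliance_pct = 100.0 * patched_in_window / systems_in_scope

print(f"Mean time to discovery: {mttd_hours:.1f} hours")
print(f"Patch policy compliance: {patch_compliance_pct:.1f}%")
```

Nothing fancy, which is the point: these metrics are cheap to gather and have obvious units, exactly what the definition of a good metric calls for.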
Qualitative ‘Metrics’

I’m very respectful of Andy’s work and his (correct) position regarding the need for any metrics to be numbers and have units of measure. That said, there are some things that aren’t metrics (strictly speaking) but which can still be useful to track, and for benchmarking yourself against other companies. We’ll call these “qualitative metrics,” even though that’s really an oxymoron. Keep in mind that the actual numbers you get for these qualitative assessments aren’t terribly meaningful, but the trend lines are. We’ll discuss how to leverage these ‘metrics’/benchmarks later. But some context on your organization’s awareness and attitudes around security is critical.

  • Awareness: % of employees signing acceptable use policies, % of employees taking security training, % of trained employees passing a security test, % of incidents due to employee error
  • Attitude: % of employees who know there is a security group, % of employees who believe they understand threats to private data, % of employees who believe security hinders their job activities

We know what you are thinking. What a load of bunk. And for gauging effectiveness you aren’t wrong. But any security program is about more than just the technical controls – a lot more. So qualitatively understanding the perception, knowledge, and awareness of security among employees is important. Not as important as incident metrics, so we suggest focusing on the technical controls first. But you ignore personnel and attitudes at your own risk. More than a few security folks have been shot down because they failed to pay attention to how they were perceived internally. Again, entire books have been written about security metrics. Our goal is to provide some ideas (and references) for you to understand what you can count, but ultimately what you do count depends on your security program and business imperatives. Next we will focus on how to collect these metrics systematically.
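Since the absolute values of these qualitative ‘metrics’ matter far less than their direction, tracking them can be as simple as watching quarter-over-quarter deltas. A trivial sketch, with entirely made-up survey numbers:

```python
# Hypothetical quarterly survey results: % of trained employees
# passing the security test (numbers invented for illustration).
pass_rate_by_quarter = [61.0, 64.5, 70.2, 73.8]

# Quarter-over-quarter change; the trend is the signal, not the level.
qoq_changes = [later - earlier for earlier, later in
               zip(pass_rate_by_quarter, pass_rate_by_quarter[1:])]

improving = all(change > 0 for change in qoq_changes)
print(f"QoQ changes: {[round(c, 1) for c in qoq_changes]}, improving: {improving}")
```

Whether 73.8% is "good" is unanswerable; that the number has climbed four quarters straight is the useful fact.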
Because without your own data, you can’t compare anything.


FAM: Selection Process

Define Needs

The first step in the process is to determine your needs, keeping in mind that there are two main drivers for File Activity Monitoring projects, and it’s important to understand the differences and priorities between them:

  • Entitlement management
  • Activity monitoring

Most FAM use cases, such as data owner identification, fall into one of these two categories. It’s easy to say “our goal is to audit all user access to files”, but we recommend you get more specific. Why are you monitoring? Is your primary need security or compliance? Are there specific business unit requirements? These answers all help you pick the best solution for your individual requirements. We recommend the following process for this step:

  • Create a selection committee: File Activity Monitoring initiatives tend to involve three major technical stakeholders, plus compliance/legal. On the IT side we typically see security and server and/or storage management involved. This varies considerably, based on the size of the organization and the complexity of the storage infrastructure. For example, it might be the document management system administrators, SharePoint administrators, NAS/storage management, and server administration. The key is to involve the major administrative leads for your storage repositories. You may also need to involve network operations if you plan to use network monitoring.
  • Define the systems and platforms to protect: FAM projects are typically driven by a clear audit or security goal tied to particular storage repositories. In this stage, detail the scope of what will be protected and the technical specifics of the platforms involved. You’ll use this list to determine technical requirements and prioritize features and platform support later. Remember that needs grow over time, so break the list into a group of high-priority systems with immediate requirements, and a second group summarizing all major platforms you may need to protect later.
  • Determine protection and compliance requirements: For some repositories you might want strict preventative security controls, while for others you may just need comprehensive activity monitoring or entitlement management to satisfy a compliance requirement. In this step, map your protection and compliance needs to the platforms and repositories from the previous step. This will help you determine everything from technical requirements to process workflow.
  • Outline process workflow and reporting requirements: File Activity Monitoring workflow varies by use. You will want to define different workflows for entitlement management and activity monitoring, as they may involve different people; that way you can define what you need instead of having the tool determine your process. In most cases audit, legal, or compliance has at least some sort of reporting role. Different FAM tools have different strengths and weaknesses in their management interfaces, reporting, and internal workflow, so think through the process before defining technical requirements to prevent headaches down the road.

By the end of this phase you should have defined key stakeholders, convened a selection team, prioritized the systems to protect, determined protection requirements, and roughed out process workflow.

Formalize Requirements

This phase can be performed by a smaller team working under the mandate of the selection committee. Here, the generic needs from phase 1 are translated into specific technical features, and any additional requirements are considered. This is the time to come up with criteria for directory integration, repository platform support, data storage, hierarchical deployments, change management integration, and so on. You can always refine these requirements after you begin the selection process and get a better feel for how the products work.
At the conclusion of this stage you will have a formal RFI (Request For Information) for vendors, and a rough RFP (Request For Proposals) to clean up and formally issue in the evaluation phase.

Evaluate Products

As with any products, it can be difficult to cut through the marketing materials and figure out whether a product really meets your needs. The following steps should minimize your risk and help you feel confident in your final decision:

  • Issue the RFI: Larger organizations should issue an RFI through established channels and contact the leading FAM vendors directly. If you’re a smaller organization, start by sending your RFI to a trusted VAR and emailing the FAM vendors that appear appropriate for your organization.
  • Perform a paper evaluation: Before bringing anyone in, match any materials from the vendor or other sources to your RFI and draft RFP. Currently few vendors are in the FAM market, so your choices will be limited, but you should be fully prepared before you go into any sales situations. Also use outside research sources and product comparisons.
  • Bring in vendors for on-site presentations and demonstrations: Instead of a generic demonstration, ask each vendor to walk through your specific use cases. Don’t expect a full response to your draft RFP – these meetings are to help you better understand the different options and eventually finalize your requirements.
  • Finalize your RFP and issue it to your short list of vendors: At this point you should completely understand your specific requirements, and can issue a formal, final RFP.
  • Assess RFP responses and begin product testing: Review the RFP results and drop anyone who fails to meet your minimum requirements (such as platform support), as opposed to ‘nice-to-have’ features. Then bring in any remaining products for in-house testing. You will want to replicate your highest-volume system and its traffic if at all possible. Build a few basic policies that match your use cases, and then violate them, so you get a feel for policy creation and workflow.
  • Select, negotiate, and buy: Finish testing, take the results to the full selection committee, and begin negotiating with your top choice.

Internal Testing

In-house testing is the last chance to find problems in your selection process. Make sure you test the products as thoroughly as possible. And keep in mind that smaller organizations may not have the resources or even the opportunity to test before purchase. A few key aspects to test are:

  • Platform support and installation: Determine agent or integration compatibility (if needed) with your repositories. If you plan to use agents or integrate with a document management system, this is one


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.