White Paper: Network Security in the Age of *Any* Computing

We all know about the challenges mobile devices pose for security professionals, driven by the need to connect to anything from anywhere. We have done some research on how to start securing those mobile devices, and have broadened that research with a network-centric perspective on these issues.

Let's set the stage for this paper: Everyone loves their iDevices and Androids. The computing power that millions now carry in their pockets would have required a raised floor and a large room full of big iron just 25 years ago. But that's not the only impact we see from this wave of consumerization – the influx of consumer devices requiring access to corporate networks. Whatever control you thought you had over the devices in the IT environment is gone. End users pick their devices and demand access to critical information within the enterprise. Whether you like it or not.

And that's not all. We also face demands for unfettered access from anywhere in the world, at any time of day. And though smartphones are the most visible devices, there are more. We have the ongoing tablet computing invasion (iPad for the win!), and a new generation of workers who demand the ability to choose their computers, mobile devices, and applications. Even better, you aren't in a position to dictate much of anything moving forward. It's a great time to be a security professional, right?

In this paper we focus on the network architectures and technologies that can help you protect critical corporate data, given your requirements to provide users with access to critical and sensitive information on any device, from anywhere, at any time. A special thanks to ForeScout for sponsoring the research.

Find it in the research library or download the PDF directly: Network Security in the Age of Any Computing: Risks and Options to Control Mobile, Wireless, and Endpoint Devices.


Friday Summary: April 1, 2011

Okay folks – raise your hands for this one. How many of you get an obvious spam message from a friend or family member on a weekly basis? For me it's more like monthly, but it sure is annoying. The problem is that when I get these things I have a tendency to try and run them down to figure out exactly what was compromised. Do the headers show it came from their computer? Or maybe their web-based email account? Or is it just random spoofing from a botnet… which could mean any sort of compromise? Then, assuming I can even figure that part out, I email or call them up to let them know they've been hacked. Which instantly turns me into their tech support.

This is when things start to suck. Because, for the average person, there isn't much they can do. They expect their antivirus to work, and the initial reaction is usually "I ran a scan and it says I'm clean". Then I have to tell them that AV doesn't always work. Which goes over great, as they tell me how much they spent on it. Depending on what I can pick up from the email headers, we then get to cover the finer points of changing webmail passwords, checking for silent forwards, and setting recovery accounts. Or maybe I tell them their computer is owned for sure and they need to nuke it from orbit (back up data, wipe it, reinstall, scan data, restore data). None of that is remotely possible for most people, which means they may have to spend more than their PoS is worth paying the Geek Squad to come out, steal their drunken naked pictures, and lose the rest of their data. After which I might still get spam, if the attacker sniffed their address book and shoveled it onto some zombie PC(s).

Or they ignore me. I had a lawyer friend do that once. On a computer sometimes used for work email. Sigh. There's really no good answer unless you have a ton of spare time to spend hunting down the compromise… which technically might not be them anyway (no need to send the spam from the person you compromised if another name in the social network might also do the trick). For immediate family I will go fairly deep to run things down (including getting support from vendor friends on occasion), but I have trained most of them. For everyone else? I limit myself to a notification and some basic advice. Then I add them to my spam filter list, because as long as they can still read email and access Facebook they don't really care.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences
  • Mike quoted on metrics in Dark Reading.
  • Adrian quoted in ComputerWorld on McAfee's acquisition of Sentrigo.

Favorite Securosis Posts
  • Rich: PROREALITY: Security is rarely a differentiator. There's a bare minimum line you need to keep customer trust. Anything more than that rarely matters.
  • Adrian Lane: Captain Obvious Speaks: You Need Layers.
  • Mike Rothman: File Activity Monitoring: Index. You'll be hearing a lot about FAM in the near future. And you heard it here first.

Other Securosis Posts
  • White Paper: Network Security in the Age of Any Computing.
  • Incite 3/30/2011: The Silent Clipper.
  • Comments on Ponemon's "What Auditors think about Crypto".
  • Quick Wins with DLP Light.
  • FAM: Policy Creation, Workflow, and Reporting.
  • FAM: Selection Process.
  • Security Benchmarking, Going Beyond Metrics: Introduction.
  • Security Benchmarking, Going Beyond Metrics: Security Metrics (from 40,000 feet).

Favorite Outside Posts
  • Rich: Errata Security: "Cybersecurity" and "hacker": I'm taking them back. If I try to describe what I do (security analyst) they think I'm from Wall St. If I say "cybersecurity analyst" they get it right away. To be honest, I really don't know why people in the industry hate "cyber". You dislike Neuromancer or something?
  • Adrian Lane: The 93,000 Firewall Rule Problem.
  • Mike Rothman: The New Corporate Perimeter. If you missed this one, read it. Now. GP is way ahead on thinking about how security architecture must evolve in this mobile/cloud reality. The world is changing, folks – disregard it and I've got a front end processor to sell you.
  • Rich: BONUS LINK: The writing process. Oh my. Oh my my my. If you ever write on deadline and word count, you need to read this.

Research Reports and Presentations
  • Network Security in the Age of Any Computing.
  • The Securosis 2010 Data Security Survey.
  • Monitoring up the Stack: Adding Value to SIEM.
  • Network Security Operations Quant Metrics Model.
  • Network Security Operations Quant Report.
  • Understanding and Selecting a DLP Solution.
  • White Paper: Understanding and Selecting an Enterprise Firewall.
  • Understanding and Selecting a Tokenization Solution.

Top News and Posts
  • European Parliament computer network breached.
  • BP loses laptop with private info on 13,000 people. BP Spills Data Too.
  • The DataLossDB project welcomes Dissent! As we mentioned in the intro, you should support this project.
  • GoGrid Security Breach.
  • Restaurant chain fined under Mass privacy law.
  • Mass SQL Injection Attack.
  • NSA Investigates NASDAQ Hack.
  • Dozens of exploits released for popular SCADA programs.
  • Twitter, JavaScript Defeat NYT's $40m Paywall.

Blog Comment of the Week
For the past couple years we've been donating to Hackers for Charity, but in honor of Dissent joining the DataLossDB project we are directing this week's donation ($100) to The Open Security Foundation. This week's best comment goes to SomeSecGuy, in response to PROREALITY: Security is rarely a differentiator.

"TJ Maxx's revenues went UP after their big breach. What mattered more to its customers than security? A good deal on clothes, I guess. There probably is a market segment that cares more about security than other factors but I don't know what it is. Price is typically the primary driver even for business decisions."


PROREALITY: Security is rarely a differentiator

I've been in this business a long time – longer than most, though not as long as some. That longevity provides perspective, and has allowed me to observe the pendulum swinging back and forth more than once. This particular pendulum is the security as an enabler concept – you know, positioning security not as an overhead function but as a revenue driver (either direct or indirect). Jeremiah's post earlier this week, PROTIP: Security as a Differentiator, brought back that periodic (and ultimately fruitless) discussion. His general contention is that security can differentiate an offering, ultimately leading to security being a vehicle that drives revenue. So before we start down this path again, let me squash it like the cockroach it is.

First we examine one of Jeremiah's contentions:

When security is made visible (i.e. help customers be and feel safe), the customer may be more inclined to do business with those who clearly take the matter seriously over others who don't.

That's not entirely false. But the situations (or in marketing speak, segments) where that is true are very limited. Banks have been telling me for years that churn increases after a breach is publicized, and that the ones which say they are secure gain customers. I still don't buy it, mostly because the data always seems to come from some vendor pushing their product to protect bank customer data.

The reality is that words do not follow behavior when it comes to security. Whether you sit on the vendor side or the user side you know this. When you ask someone if they are worried about security, of course they say yes. Every single time. But when you ask them to change their behavior – or more specifically not do something they want to because it's a security risk – you see the reality. The vast majority of people don't care about security enough to do (or not do) anything.

Jeremiah is dreaming – if he were describing reality, everyone related to the security business would benefit. Unfortunately it's more of a PRODREAM than a PROTIP. Or maybe even a PROHALLUCINATION. He's not high on peyote or anything – Jer is high on the echo chamber. When you hang around all day with people who care about security, you tend to think the echo chamber reflects the mass market. It doesn't – not by a long shot.

Spending a crapload of money on really being secure is still a good thing to do. To be clear, I would like you to do that. But don't do it to win more business – you won't, and you'll be disappointed. Or your bosses will be disappointed in you for failing to deliver. Invest in security because it's the right thing to do. For your customers and for the sustainability of your business. You may not get a lot of revenue upside from being secure, but you can avoid revenue downside.

I believe this to be true for most businesses, but not all. Cloud service providers absolutely can differentiate based on security. That will matter to some customers and possibly ease their migration to the cloud. There are other examples of this as well, but not many.

I really wish Jeremiah were right. It would be great for everyone. But I'd be irresponsible if I didn't point out the cold, hard reality.

Photo credit: "3 1 10 Bearman Cartoon Cannabis Skunk Hallucinations" originally uploaded by Bearman2007


On Preboot Authentication and Encryption

I am working on an encryption project – evaluating an upcoming product feature for a vendor – and the research is more interesting than I expected. Not that the feature is uninteresting, but I thought I knew all the answers going into this project. I was wrong. I have been talking with folks on the Twitters and in private interviews, and have discovered that far more organizations than I suspected are configuring their systems to automatically skip preboot authentication and simply boot up into Windows or Mac OS X (yes, for real – a bunch are using disk encryption on Macs).

For those of you who don't know, with most drive encryption you have a mini operating system that boots first, so you can authenticate the user. Then it decrypts and loads the main operating system (Windows, Mac OS X, Linux, etc.). Skipping the mini OS requires you to configure it to automatically authenticate and load the operating system without a password prompt. Organizations tend to do this for a few reasons:

  • So users don't have to log in twice.
  • So you don't have to deal with managing and synchronizing two sets of credentials (preboot and OS).
  • To reduce support headaches.

But the convenience factor is the real reason. The problem with skipping preboot authentication is that you then rely completely on OS authentication to protect the device. My pentester friends tell me they can pretty much always bypass OS authentication. This may also be true for a running/sleeping/hibernating system, depending on how you have encryption configured (and how your product works). In other words – if you skip preboot, the encryption generally adds no real security value.

In the Twitter discussion about advanced pen testering, our very own David Mortman asked:

@rmogull Sure but how many lost/stolen laptops are likely to be attacked in that scenario vs the extra costs of pre-boot?

Which is an excellent point. What are the odds of an attacker knowing how to bypass the encryption when preboot isn't used? And then I realized that in that scenario, the "attacker" is most likely someone picking up a "misplaced" laptop, and even basic (non-encryption) OS security is good enough. Which leads to the following decision tree:

  1. Are you worried about attackers who can bypass OS authentication? If so, encrypt with preboot authentication; if not, continue to step 2.
  2. Do you need to encrypt only for compliance (meaning security isn't a priority)? If so, encrypt and disable preboot; if not, continue to step 3.
  3. Encrypt with preboot authentication.

In other words, encrypt if you worry about data loss due to lost media or are required to by compliance. If you encrypt for compliance and don't care about data loss, then you can skip preboot.
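To make the decision tree easier to follow, here is a minimal sketch in Python encoding the same three steps. The function name and its inputs are illustrative stand-ins for judgment calls about your threat model, not settings from any actual encryption product.

```python
def preboot_decision(worried_about_os_bypass: bool,
                     compliance_only: bool) -> str:
    """Encode the three-step full disk encryption decision tree.

    Both inputs are assumptions about your threat model -- this is
    just the post's logic expressed as code.
    """
    # Step 1: attackers who can bypass OS authentication defeat the
    # encryption entirely unless preboot authentication is enabled.
    if worried_about_os_bypass:
        return "Encrypt with preboot authentication"
    # Step 2: encrypting purely for a compliance checkbox means the
    # convenience of skipping preboot may be acceptable.
    if compliance_only:
        return "Encrypt and disable preboot"
    # Step 3: otherwise, default to the safer configuration.
    return "Encrypt with preboot authentication"


if __name__ == "__main__":
    print(preboot_decision(worried_about_os_bypass=False,
                           compliance_only=True))
    # -> Encrypt and disable preboot
```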


Incite 3/30/2011: The Silent Clipper

I'm very fortunate to have inherited Rothman hair, which is gray but plentiful and grows fast. Like fungus. Given my schedule, I tend to wait until things get lost in my hair before I get it cut. Like birds; or yard debris; or Nintendo DS games. A few weeks back the Boss told me to get it cut when I lost my iPhone in my hair. So I arranged a day to hit the barber I have frequented for years.

I usually go on Mondays when I can, because his partner is off. These guys have a pretty sophisticated queuing system, honed over 40+ years. Basically you wait until your guy is open. That works fine unless the partner is open and your guy is backed up. Then the partner gives me the evil eye as he listens to his country music. But I have to stay with my guy because he has a vacuum hooked up to his clipper. Yes, I wait for my guy because he uses a professional Flowbee.

But when I pulled up the shop was closed. I've been going there for 7 years and the shop has never been closed on Monday. Then I looked at the sign, which shows hours only for the partner – my guy's hours aren't listed. Rut roh. I got a bad feeling. But I was busy, so I figured I'd go back later in the week and see what happened. I went in Thursday, and my guy wasn't there. Better yet, the partner was backed up, but I had just lost one of the kids in my hair, so I really needed a cut. I'm quick on the uptake, so I figured something was funky, but all my guy's stuff was still there – including pictures of his grandkids. It's like the place that time forgot. But you can't escape time. It catches everyone.

Finally the situation was clarified when a customer came in to pay his respects to the partner. My fears were confirmed: my guy was gone, his trusty clippers silenced. The Google found his obituary. Logically I know death completes the circle of life, and no one can escape. Not even my barber. Truth be told, I was kind of sad. But I probably shouldn't be. Barber-man lived a good life. He cut hair for decades and enjoyed it. He did real estate as well. He got a new truck every few years, so the shop must have provided OK. He'd talk about his farm, which kept him busy. I can't say I knew him well, but I'm going to miss him.

So out of respect I wait and then sit in the partner's chair. Interestingly enough he gave me a great cut, even though I was covered in hair without the Flowbee. I was thinking I'd have to find a new guy, but maybe I'll stick with partner-man. Guess there is a new barber-man in town. Godspeed Richard. Enjoy the next leg of your journey.

-Mike

Photo credits: "Barber Shop" originally uploaded by David Smith

Incite 4 U

Can I call you Dr. Hacker?: Very interesting analysis here by Ed Moyle about whether security should be visionary. Personally I don't know what that means, because our job is to make sure visionary business leaders can do visionary things without having critical IP or private data show up on BitTorrent. But the end of the post – on whether security will be innovation-driven (like product development), standards-driven and innovation-averse (like accounting), or standards-driven and innovation-accepting (like medicine) – got me thinking. We'd like to think we'll be innovation-driven, but ultimately I suspect we'll end up like medicine. Everyone still gets sick (because the viruses adapt to our defenses), costs continue to skyrocket, and the government eventually steps in to make everything better. Kill me now, Dr. Hacker. – MR

Learn clarity from the (PHP)Fog: One of the things that fascinates me about breaches (and most crisis events) is how the affected react. As I wrote about last week, most people do almost exactly the wrong thing. But as we face two major breaches within our industry – at RSA ("everyone pretend you don't know what's going on even though it's glaringly obvious") and Comodo ("we were the victim of a state-sponsored attack from Iran, not a teenager, we swear") – perhaps we should learn some lessons from PHPFog ("How We Got Owned by a Few Teenagers (and Why It Will Never Happen Again)"). Honesty is, by far, the best way to maintain the trust of your customers and the public. Especially when you use phrases like, "This was really naive and irresponsible of me." Treat your customers and the public like adults, not my 2-year-old. Especially when maintaining secrecy doesn't increase their security. – RM

MySQL PwNaGe: For the past few days, the news that mysql.com has both a SQL injection vulnerability and a Cross-Site Scripting (XSS) vulnerability has been making the rounds. The vulnerabilities are not in the MySQL database engine, but in the site itself. Detailed information from the hacked site was posted on Full Disclosure last Sunday as proof. Apparently the MySQL team was alerted to the issue in January, and this looks like a case of "timely disclosure" – the attackers could have taken the hack further if they wanted. There aren't many takeaways from this, other than that SQL injection is still a leading attack vector and you should have quality passwords to help survive dictionary attacks in the aftermath of a breach. Still no word from Oracle – there is no acknowledgement of the attack on mysql.com. I wonder if they will deploy a database firewall? – AL

APT: The FUD goes on and on and on and on: I applaud Chris Eng's plea for the industry to stop pushing the APT FUD at all times. He nails the fact that vendors continue to offer solutions to the APT because they don't want to miss out when the "stop APT project" gets funded. The nebulous definition of APT helps vendors obfuscate the truth, and as Chris points out it frustrates many of us. Yes, we should call out vendors for peddling that FUD.


Security Benchmarking, Going Beyond Metrics: Security Metrics (from 40,000 feet)

In our introduction to Security Benchmarking, Going Beyond Metrics, we spent some time defining metrics and pointing out that they have multiple consumers, which means we need to package and present the data differently for these different constituencies. As you'll see, there is no lack of things to count. But just because you can count something doesn't mean you should. So let's dig a bit into what you can count.

Disclaimer: we can only go so deep in a blog series. If you are intent on building a metrics program, you must read Andy Jaquith's seminal work Security Metrics: Replacing Fear, Uncertainty and Doubt. The book goes into great detail about how to build a security metrics program. The first significant takeaway is how to define a good security metric in the first place. Good security metrics are:

  • Expressed as numbers
  • Have one or more units of measure
  • Measured in a consistent and objective way
  • Can be gathered cheaply
  • Have contextual relevance

Contextual relevance tends to be the hard one. As Andy says in his March 2010 security metrics article in Information Security magazine: "the metrics must help someone – usually the boss – make a decision about an important security or business issue." That's where most security folks tend to fall down: focusing on things that don't matter, or drawing suspect conclusions from operational data. For example, generating a security posture rating from AV coverage won't work well.

Consensus Metrics

We also need to tip our hats to the folks at the Center for Internet Security, who have published a good set of starter security metrics, built via their consensus approach. Also take a look at their QuickStart guide, which does a good job of identifying the process to implement a metrics program. Yes, consensus involves lowest common denominators, and their metrics are no different. But keep things in context: the CIS document provides a place to start, not the definitive list of what you should count. Taking a look at the CIS consensus metrics:

  • Incident Management: Cost of incidents, Mean cost of incidents, Mean incident recovery cost, Mean time to incident discovery, Number of incidents, Mean time between security incidents, Mean time to incident recovery
  • Vulnerability Management: Vulnerability scanning coverage, % of systems with no severe vulnerabilities, Mean time to mitigate vulnerabilities, Number of known vulnerabilities, Mean cost to mitigate vulnerabilities
  • Patch Management: Patch policy compliance, Patch management coverage, Mean time to patch, Mean cost to patch
  • Configuration Management: % of configuration compliance, Configuration management coverage, Current anti-malware compliance
  • Change Management: Mean time to complete changes, % of changes with security review, % of changes with security exceptions
  • Application Security: # of applications, % of critical applications, Application risk assessment coverage, Application security testing coverage
  • Financial: IT security spending as % of IT budget, IT security budget allocation

Obviously there are many other types of information you can collect – particularly from your identity, firewall/IPS, and endpoint management consoles. Depending on your environment these other metrics may be important for operations. We just want to provide a rough sense of the kinds of metrics you can start with.
For those gluttons for punishment who really want to dig in, we have built Securosis Quant models that document extremely granular process maps and the associated metrics for Patch Management, Network Security Operations (monitoring/managing firewalls and IDS/IPS), and Database Security. We won't claim all these metrics are perfect. They aren't even supposed to be – nor are they all relevant to all organizations. But they are a place to start. And most folks don't know where to start, so this is a good thing.

Qualitative 'Metrics'

I'm very respectful of Andy's work and his (correct) position that any metric must be a number with a unit of measure. That said, there are some things that aren't metrics (strictly speaking) but which can still be useful to track, and for benchmarking yourself against other companies. We'll call these "qualitative metrics," even though that's really an oxymoron. Keep in mind that the actual numbers you get for these qualitative assessments aren't terribly meaningful, but the trend lines are. We'll discuss how to leverage these 'metrics'/benchmarks later. But some context on your organization's awareness of and attitudes toward security is critical.

  • Awareness: % of employees signing acceptable use policies, % of employees taking security training, % of trained employees passing a security test, % of incidents due to employee error
  • Attitude: % of employees who know there is a security group, % of employees who believe they understand threats to private data, % of employees who believe security hinders their job activities

We know what you are thinking: what a load of bunk. And for gauging effectiveness you aren't wrong. But any security program is about more than just the technical controls – a lot more. So qualitatively understanding the perception, knowledge, and awareness of security among employees is important. Not as important as incident metrics, so we suggest focusing on the technical controls first. But you ignore personnel and attitudes at your own risk. More than a few security folks have been shot down because they failed to pay attention to how they were perceived internally.

Again, entire books have been written about security metrics. Our goal is to provide some ideas (and references) to help you understand what you can count, but ultimately what you do count depends on your security program and business imperatives. Next we will focus on how to collect these metrics systematically. Because without your own data, you can't compare anything.
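To ground a few of the CIS-style metrics above, here is a minimal Python sketch computing mean time to discovery, mean time to recovery, and mean incident cost from raw incident records. The field names and sample data are hypothetical – the point is only that each metric is a number, has units, and can be gathered consistently.

```python
from datetime import datetime
from statistics import mean

# Hypothetical incident records: when each incident occurred, when it
# was discovered, when recovery completed, and its direct cost.
incidents = [
    {"occurred": datetime(2011, 1, 3), "discovered": datetime(2011, 1, 5),
     "recovered": datetime(2011, 1, 9), "cost": 12000},
    {"occurred": datetime(2011, 2, 10), "discovered": datetime(2011, 2, 11),
     "recovered": datetime(2011, 2, 14), "cost": 4500},
]

# Mean time to incident discovery, in days (a CIS consensus metric).
mttd = mean((i["discovered"] - i["occurred"]).days for i in incidents)

# Mean time to incident recovery, in days.
mttr = mean((i["recovered"] - i["discovered"]).days for i in incidents)

# Mean cost of incidents.
mean_cost = mean(i["cost"] for i in incidents)

print(f"Mean time to discovery: {mttd:.1f} days")
print(f"Mean time to recovery:  {mttr:.1f} days")
print(f"Mean incident cost:     ${mean_cost:,.0f}")
```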


FAM: Selection Process

Define Needs

The first step in the process is to determine your needs, keeping in mind that there are two main drivers for File Activity Monitoring projects, and it's important to understand the differences and priorities between them:

  • Entitlement management
  • Activity monitoring

Most use cases for FAM, such as data owner identification, fall into one of these two categories. It's easy to say "our goal is to audit all user access to files", but we recommend you get more specific. Why are you monitoring? Is your primary need security or compliance? Are there specific business unit requirements? These answers all help you pick the best solution for your individual requirements. We recommend the following process for this step:

  • Create a selection committee: File Activity Monitoring initiatives tend to involve three major technical stakeholders, plus compliance/legal. On the IT side we typically see security and server and/or storage management involved. This varies considerably, based on the size of the organization and the complexity of the storage infrastructure. For example, it might be the document management system administrators, SharePoint administrators, NAS/storage management, and server administration. The key is to involve the major administrative leads for your storage repositories. You may also need to involve network operations if you plan to use network monitoring.
  • Define the systems and platforms to protect: FAM projects are typically driven by a clear audit or security goal tied to particular storage repositories. In this stage, detail the scope of what will be protected and the technical specifics of the platforms involved. You'll use this list to determine technical requirements and prioritize features and platform support later. Remember that needs grow over time, so break the list into a group of high-priority systems with immediate requirements, and a second group summarizing all major platforms you may need to protect later.
  • Determine protection and compliance requirements: For some repositories you might want strict preventative security controls, while for others you may just need comprehensive activity monitoring or entitlement management to satisfy a compliance requirement. In this step, map your protection and compliance needs to the platforms and repositories from the previous step. This will help you determine everything from technical requirements to process workflow.
  • Outline process workflow and reporting requirements: File Activity Monitoring workflow varies by use. You will want to define different workflows for entitlement management and activity monitoring, as they may involve different people; that way you can define what you need instead of having the tool determine your process. In most cases audit, legal, or compliance will have at least some sort of reporting role. Different FAM tools have different strengths and weaknesses in their management interfaces, reporting, and internal workflow, so think through the process before defining technical requirements to prevent headaches down the road.

By the end of this phase you should have defined key stakeholders, convened a selection team, prioritized the systems to protect, determined protection requirements, and roughed out process workflow. One simple way to capture that output is sketched below.
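Here is a minimal sketch of what the output of this phase might look like as a requirements matrix, mapping each repository to its priority and protection needs. The repository names, platforms, and requirement labels are all invented for illustration, not a format any FAM product expects.

```python
# Illustrative requirements matrix from the Define Needs phase.
fam_scope = [
    {"repository": "Finance NAS", "platform": "NetApp CIFS",
     "priority": "high", "needs": ["activity monitoring", "blocking"]},
    {"repository": "Engineering SharePoint", "platform": "SharePoint 2010",
     "priority": "high", "needs": ["entitlement management"]},
    {"repository": "Marketing file server", "platform": "Windows Server",
     "priority": "later", "needs": ["activity monitoring"]},
]

# The high-priority group drives the immediate RFI; the "later" group
# informs platform support questions for future phases.
rfi_now = [s for s in fam_scope if s["priority"] == "high"]
for system in rfi_now:
    print(system["repository"], "->", ", ".join(system["needs"]))
```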
Formalize Requirements

This phase can be performed by a smaller team working under the mandate of the selection committee. Here, the generic needs from the first phase are translated into specific technical features, and any additional requirements are considered. This is the time to come up with criteria for directory integration, repository platform support, data storage, hierarchical deployments, change management integration, and so on. You can always refine these requirements after you begin the selection process and get a better feel for how the products work. At the conclusion of this stage you will have a formal RFI (Request For Information) for vendors, and a rough RFP (Request For Proposals) to clean up and formally issue in the evaluation phase.

Evaluate Products

As with any product category, it can be difficult to cut through the marketing materials and figure out whether a product really meets your needs. The following steps should minimize your risk and help you feel confident in your final decision:

  • Issue the RFI: Larger organizations should issue an RFI through established channels and contact the leading FAM vendors directly. If you're a smaller organization, start by sending your RFI to a trusted VAR and email the FAM vendors which appear appropriate for your organization.
  • Perform a paper evaluation: Before bringing anyone in, match any materials from the vendor or other sources to your RFI and draft RFP. Currently few vendors are in the FAM market, so your choices will be limited, but you should be fully prepared before you go into any sales situations. Also use outside research sources and product comparisons.
  • Bring in vendors for on-site presentations and demonstrations: Instead of a generic demonstration, ask each vendor to walk through your specific use cases. Don't expect a full response to your draft RFP – these meetings are to help you better understand the different options and eventually finalize your requirements.
  • Finalize your RFP and issue it to your short list of vendors: At this point you should completely understand your specific requirements, so issue a formal, final RFP.
  • Assess RFP responses and begin product testing: Review the RFP results and drop anyone who misses any of your minimal requirements (such as platform support), as opposed to 'nice-to-have' features. Then bring in any remaining products for in-house testing. You will want to replicate your highest-volume system and its traffic if at all possible. Build a few basic policies that match your use cases, and then violate them, so you get a feel for policy creation and workflow.
  • Select, negotiate, and buy: Finish testing, take the results to the full selection committee, and begin negotiating with your top choice.

Internal Testing

In-house testing is the last chance to find problems in your selection process. Make sure you test the products as thoroughly as possible. And keep in mind that smaller organizations may not have the resources or even the opportunity to test before purchase. A few key aspects to test are:

  • Platform support and installation: Determine agent or integration compatibility (if needed) with your repositories. If you plan to use agents or integrate with a document management system, this is one of the most important things to verify.


File Activity Monitoring Series Complete (Index)

Once again, I have knocked off a series of posts for a new white paper. The title is "Understanding and Selecting a File Activity Monitoring Solution". Although there are only a few vendors in the market, this is a technology I have been waiting a few years for, and I think it's pretty useful. There are basically two sides to it:

  • Entitlement management: Collecting all user privileges in monitored file repositories, linking into your directory servers, and giving you a highly simplified process for cleaning up all the messes and managing things better moving forward.
  • Activity monitoring: Full activity monitoring for all your file repositories in scope… including alerting for policy violations.

It's pretty cool stuff – imagine setting a policy to alert you any time someone copies an entire directory off the server instead of a single file. Or copies 30 files in a day, when they normally only open 1 or 2. And that's just scratching the surface of the potential.

The links to all the posts are below, and I could use any feedback you have before we convert this puppy to a paper and post it. (If you are seeing this in RSS, you will have to click the post to see all the links, because I'm too lazy to add them in manually.)
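As a rough illustration of that second policy idea – alerting when a user touches far more files than their daily baseline – here is a minimal Python sketch. The event data, baseline numbers, and 10x threshold are all made up for the example; real FAM products implement this kind of logic internally.

```python
from collections import Counter

# Hypothetical one-day stream of (user, file) access events.
events = [("alice", f"/finance/q1/report_{n}.xls") for n in range(30)]
events += [("bob", "/eng/specs/widget.doc")]

# Baseline: how many files each user opens on a normal day (assumed).
baseline = {"alice": 2, "bob": 5}
MULTIPLIER = 10  # alert when daily activity exceeds 10x the baseline

daily_counts = Counter(user for user, _ in events)
for user, count in daily_counts.items():
    if count > baseline.get(user, 1) * MULTIPLIER:
        print(f"ALERT: {user} accessed {count} files today "
              f"(baseline {baseline[user]})")
# -> ALERT: alice accessed 30 files today (baseline 2)
```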


Comments on Ponemon’s “What Auditors think about Crypto”

The Ponemon Institute has released a white paper, What Auditors Think about Crypto (registration required). I downloaded it and took a cursory look at the results. My summary of the report is "IT auditors rely on encryption, but key management can be really hard". No shock there. A client passed along a TechTarget blog post in which Larry Ponemon is quoted as saying auditors prefer encryption, but which is worded to make the study sound like a comparison between encryption and tokenization. So I dove deep into the contents to see if I had missed something. Nope. The study does not compare encryption to tokenization, but Larry's juxtaposition implies it does. The quotes from the TechTarget post are as follows:

Encryption has always been a coveted technology to auditors, but organizations that have problems with key management may view tokenization as a good alternative.

and

Tokenization is an up and coming technology; we think PCI DSS and some other compliance requirements will allow tokenization as a solid alternative to encryption.

and

In general auditors in our study still favor encryption in all the different use cases that we examined.

These are all technically true but misleading. If you had to choose one technology over another for all use cases, I don't know of a security professional who wouldn't choose encryption, but that's not a head-to-head comparison. Tokenization is a data replacement technology; encryption is a data obfuscation technology. They serve different purposes. Think about it this way: there is no practical way for tokenization to protect your network traffic, and it would be a horrible strategy for protecting backup tapes. You can't build a VPN with tokenization – the best you could do would be to use access tokens from a Kerberos-like service. That does not mean tokenization won't be the best way to secure data at rest, now or in the future. Acknowledging that encryption is sometimes essential, and that auditors rely on it, is a long way from establishing that encryption is the better or preferable technology in the abstract. Larry's conclusion is specious.

Let's be clear: the vast majority of discussion around tokenization today has to do with credit card replacement for PCI compliance. The other forms of tokens, used for access and authorization, have been around for many years and are niche technologies. It's just not meaningful to compare cryptography in general against tokenization within PCI deployments. A meaningful comparison of popularity between encryption and tokenization would need to be confined to areas where they can solve equivalent business problems. That's not GLBA, SOX, FISMA, or even HIPAA; currently it's specifically PCI-DSS.

Note that only 24% of those surveyed were PCI assessors – the people who look at credit card security on a daily basis and compare the relative merits of the two technologies for the same use case. 64% had over ten years of experience, but PCI audits have been common for less than 5. The survey population is clearly general auditors, which doesn't seem to be an appropriate audience for ascertaining the popularity of tokenization – especially if they were thinking of authorization tokens when answering the survey. Of the customers I have spoken with who want to know about tokenization, more than 70% intend to use it to help reduce the scope of PCI compliance. Certainly my sample size is smaller than the Ponemon survey's. And the folks I speak with are in retail and finance, so they are subject to PCI-DSS compliance.

At Securosis we predict that tokenization will replace encryption in many PCI-DSS regulated systems. The bulk of encryption installations, which have nothing to do with PCI-DSS and are inappropriate use cases for tokenization, will be unchanged. At a macro level these technologies go hand in hand. But as tokenization grows in popularity, in suitable situations it will often be chosen over encryption. Note that encryption systems require some form of key management, and Thales, the sponsor of Ponemon's study, is a key vendor in the HSM space, which dominates key management for encryption deployments.

Finally, there is some useful information in the report. It's worth a few minutes to review, to get some insight into decision makers and where funding is coming from. But it's just not possible to make a valid comparison between tokenization and encryption from this data.
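To make the "data replacement vs. data obfuscation" distinction concrete, here is a minimal Python sketch. The vault structure and token format are illustrative only – real tokenization systems add access controls, auditing, and durable storage – and the encryption half assumes the third-party cryptography package.

```python
import secrets

from cryptography.fernet import Fernet  # pip install cryptography

# --- Tokenization: replace the value with a random surrogate.
# There is no key; the only way back is a lookup in a protected vault.
vault = {}  # token -> original value (a real vault is hardened storage)

def tokenize(pan: str) -> str:
    token = "tok_" + secrets.token_hex(8)  # random, not derived from the PAN
    vault[token] = pan
    return token

def detokenize(token: str) -> str:
    return vault[token]  # requires vault access, not cryptography

token = tokenize("4111111111111111")
print(token)  # safe to store downstream; no math recovers the PAN from it

# --- Encryption: a reversible transform under a key. Anyone holding
# the key can decrypt, which is why key management is the hard part.
key = Fernet.generate_key()            # this key now needs managing
ciphertext = Fernet(key).encrypt(b"4111111111111111")
print(Fernet(key).decrypt(ciphertext))  # b'4111111111111111'
```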


FAM: Policy Creation, Workflow, and Reporting

Now that we have covered the base features, it's time to consider how these tie into policies, workflow, and reporting. We'll focus on the features needed to support these processes rather than defining the processes themselves.

Policy Creation

File Activity Monitoring products support two major categories of policies:

  • Entitlement (permissions/access control) policies: These define which users can access which repositories and types of data. They define rules for things like orphaned user accounts, separation of duties, role/group conflicts, and other situations that don't require real-time file activity.
  • Activity-based policies: These alert and block based on real-time user activity.

When evaluating products, look for a few key features to help with policy creation and management:

  • Policy templates that serve as examples and baselines for building your own policies.
  • A clean user interface that allows you to understand business context. For example, it should allow you to group categories, pool users and groups to speed up policy application (e.g., combine all the different accounting-related groups into "Accounting"), and group and label repositories. This is especially important given the volume of entries to manage when you integrate with large user directories and multi-terabyte repositories.
  • New policy wizards to speed up policy creation.
  • Hierarchical management for multiple FAMs in the same organization.
  • Role-based administration, including roles for super administrators and assigning policies to sub-administrators.
  • Policy backup and restore.

Workflow

As with policy creation, we see workflow requirements focusing on the two major functions of FAM: entitlement management and activity monitoring.

Entitlement Management

This workflow should support a closed-loop process for collection of privileges, analysis, and application of policy-based changes. Your tool should do more than merely collect access rights – it should help you build a process to ensure that access controls match your policies. This is typically a combination of different workflows for different goals – including identification of orphan accounts with access to sensitive data, excessive privileges, conflict of interest/separation of duties based on user groups, and restricting access to sensitive repositories. Each product and policy will be different, but they typically share a common pattern:

  1. Collect existing entitlements.
  2. Analyze based on policies.
  3. Apply corrective actions (either building an alerting/blocking policy or changing privileges).
  4. Generate a report of identified and remediated issues.

The workflow should also link into data owner identification, because this must often be understood before changing rights.

Activity Monitoring and Protection

The activity monitoring workflow is very different from entitlement management. Here the focus is on handling alerts and incidents in real time. The key interface is the incident handling queue that's common to most security tools. The queue lists incidents and supports various sorting and filtering options. The workflow tends to follow this structure:

  1. An incident occurs and an alert appears in the queue, displayed with the user, the policy violated, and the repository or file involved.
  2. The incident handler can investigate further by filtering for other activity involving that user, that repository, or that policy over a given time period (or various combinations).
  3. The handler can assign or escalate the incident to someone else, close the incident, or take corrective actions such as adjusting the file permissions manually.

The key to keeping this efficient is not requiring the incident handler to jump around the user interface in a manual process. For example, clicking on an incident should show its details, plus links to other related incidents by user, policy, and repository. Incidents should also be grouped logically – an attempt to copy an entire directory should appear as one incident, not one incident for each of 1,000 files in the repository (a grouping approach is sketched below). Any FAM product may also include additional workflows, such as one for identifying file owners.

Reporting

One of the most important functions for any File Activity Monitoring product is robust reporting – this is particularly important for meeting compliance requirements. Aside from a repository of pre-defined reports for common requirements such as PCI and HIPAA, the tool should allow you to generate arbitrary reports. (We hate to list that as a requirement, but we still occasionally see security tools that don't support creation of arbitrary reports.)
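Here is a minimal Python sketch of the incident-grouping idea mentioned above: collapsing a bulk directory copy into a single logical incident instead of 1,000 per-file alerts. The event records, time window, and grouping key are invented for illustration – actual FAM products handle this internally.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical raw events: one per file touched during a bulk copy.
events = [
    {"user": "mallory", "path": f"/finance/q1/file_{n}.xls",
     "time": datetime(2011, 3, 30, 14, 0, n % 60), "action": "copy"}
    for n in range(1000)
]

WINDOW = timedelta(minutes=5)  # assumed correlation window

def parent(path: str) -> str:
    """Parent directory of a file path."""
    return path.rsplit("/", 1)[0]

# Group events by (user, action, parent directory) so a directory copy
# surfaces as one incident rather than one alert per file.
groups = defaultdict(list)
for e in events:
    groups[(e["user"], e["action"], parent(e["path"]))].append(e)

incidents = []
for (user, action, directory), evts in groups.items():
    evts.sort(key=lambda e: e["time"])
    if evts[-1]["time"] - evts[0]["time"] <= WINDOW:
        incidents.append({"user": user, "action": action,
                          "target": directory, "file_count": len(evts)})

print(incidents)
# -> one incident: mallory copied 1000 files from /finance/q1
```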


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.