By Adrian Lane
Filling out one of our last steps in the Database Security process Framework is the Patch Management task of the Manage phase. There is really no need to re-invent the wheel here, so we will follow the process outlined in the original Project Quant for Patch Management project conducted in 2009. I have adjusted several of the tasks to be database specific, but the process as a whole is the same.
The beautiful thing about processes is that, since they are a framework to guide our efforts and interactions with others, we can adjust them however suits our needs. Add or remove steps to fit your organization’s size and dependencies. I did just that in Database Security Fundamentals, tailoring patching for small and medium businesses. But here it is best to examine this task at a high level, because patch management takes far more time and resources than typical estimates account for. It’s not just the installation of the patches, but the review and certification testing needed to ensure your production environment does not come to a crashing halt, that costs so much in time, organization, and automation tools. You will need to spend more time on this task than on the others we have already discussed.
Make no mistake – patching is a critical security operation for databases. The vast majority of security concerns and logic flaws within the database will be addressed by the database vendor. You may have workarounds, or be able to mask some flaws with third party security products, but the vendor is the only way to really ‘fix’ database security issues. That means you will be patching on a regular basis to address 0-days just as you do with ‘Priority 1’ functional issues. Database vendors have dedicated security teams to analyze attacks against their databases, and small firms must leverage their expertise. But you still need to manage the updates in a predictable fashion that does not disrupt business functions.
The following is our outline of the high level steps, with an itemization of the costs you want to consider when accounting for your database patch management process.
- Monitor for Release/Advisory: Time to gather patch release and associated data. Each of the database vendors follows a different process, but most provide patch pre-notification alerts and notification when functional and security patches are available, and do so in predictable cycles.
- Acquire: Time to get the patch. Download patch and documentation.
- Evaluate: Time to perform the initial ‘paper’ evaluation of the patch. What’s it for? Is it security-sensitive? Do we use that software? Is the issue relevant in our environment? Are there workarounds or dependencies? If the patch is appropriate, continue.
- Schedule: Time to coordinate with other groups to schedule testing and deployment. Prioritize based on the nature of the patch itself, and your infrastructure/assets. Then build out a deployment schedule based on your prioritization.
- Test and Certify: Time to perform any required testing, and certify the patch for release. Remember to include time to add or update test cases and, if you use masked production data, to extract and load a fresh data set. Verify functional tests pass and meet functional requirements. If the tests pass, continue with this process; otherwise clean up the failed test area. Factor in the cost of tools or services.
- Create Deployment Package: Time to prepare the patch for deployment.
- Confirm Deployment: Time to verify that patches were properly deployed. This includes use of configuration management or vulnerability assessment tools, as well as functional ‘sanity’ tests.
- Clean up: Time to clean up any bad deployments, remnants of the patch application procedure, rollbacks, and any other associated cruft/detritus.
- Document and Update Configuration Standards: Time to document the patch deployment (which may be required for regulatory compliance) and update any associated configuration standards/guidelines/requirements. Save the patch and documentation in a safe archive area so the update process can be repeated in a consistent fashion.
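Since this is ultimately a cost accounting exercise, it can help to see how the itemization adds up. Below is a minimal sketch in Python: the step names mirror the list above, but the hours, hourly rate, and tool cost are placeholder assumptions, not benchmarks from any survey data.

```python
# Hypothetical sketch: itemize patch management costs for a single cycle.
# Step names follow the process above; hours and rates are illustrative only.

HOURLY_RATE = 85.0   # assumed fully loaded cost per staff hour
TOOL_COSTS = 500.0   # assumed per-cycle share of test/deployment tooling

cycle_hours = {
    "monitor_advisories": 2,
    "acquire": 1,
    "evaluate": 4,
    "schedule": 3,
    "test_and_certify": 24,            # usually the biggest line item
    "create_deployment_package": 4,
    "deploy_and_confirm": 8,
    "clean_up": 2,
    "document_and_update_standards": 3,
}

labor = sum(cycle_hours.values()) * HOURLY_RATE
total = labor + TOOL_COSTS

for step, hours in sorted(cycle_hours.items(), key=lambda kv: -kv[1]):
    print(f"{step:32s} {hours:5.1f} h  ${hours * HOURLY_RATE:8.2f}")
print(f"{'TOTAL':32s} {sum(cycle_hours.values()):5.1f} h  ${total:8.2f}")
```

Even with made-up numbers, running this kind of exercise usually shows testing and certification dwarfing the actual installation time, which is exactly why the full process needs to be accounted for.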
Posted at Tuesday 23rd March 2010 5:50 pm
(0) Comments •
I often hear that there is no innovation left in security.
That’s complete bullshit.
There is plenty of innovation in security – but more often than not there’s no market for that innovation.
For anything innovative to survive (at least in terms of physical goods and software) it needs to have a market. Sometimes, as with the motion controllers of the Nintendo Wii, it disrupts an existing market by creating new value. In other cases, the innovation taps into unknown needs or desires and succeeds by creating a new market.
Security is a bit of a tougher nut. As I’ve discussed before, both on this blog and in the Disruptive Innovation talk I give with Chris Hoff, security is reactive by nature. We are constantly responding to changes in the underlying processes/organizations we protect, as well as to threats evolving to find new pathways through our defenses. With very few exceptions, we rarely invest in security to reduce risks we aren’t currently observing. If it isn’t a clear, present, and noisy danger, it usually finds itself on the back burner.
Innovations like firewalls and antivirus really only succeeded when the environment created conditions that showed off value in these tools. Typically that value is in stopping pain, and not every injury causes pain. Even when we are proactive, there’s only a market for the reactive. The pain must pass a threshold to justify investment, and an innovator can only survive for so long without customer investment.
Innovation is by definition almost always ahead of the market, and must create its own market to some degree. This is tough enough for cool things like iPads and TiVos, but nearly impossible for something less sexy like security. I love my TiVo, but I only appreciate my firewall.
As an example, let’s take DLP. By bringing content analysis into the game, DLP became one of the most innovative, if not the most innovative, data security technologies we’ve seen. Yet 5+ years in, after multiple acquisitions by major vendors, we’re still only talking about a $150M market. Why? DLP didn’t keep your website up, didn’t keep the CEO browsing ESPN during March Madness, and didn’t keep email spam-free. It addresses a problem most people couldn’t even see without a DLP tool! Only when it started assisting with compliance (not that it was required) did the market start growing.
Another example? How many of you encrypted laptops before you had to start reporting lost laptops as a data breach?
On the vendor side, real innovation is a pain in the ass. It’s your pot of gold, but only after years of slogging it out (usually). Sometimes you get the timing right and experience a quick exit, but more often than not you either have to glom onto an existing market (where you’re fighting for your life against competitors that really shouldn’t be your competitors), or you find patient investors who will give you the years you need to build a new market. Everyone else dies.
- PureWire wasn’t the first to market (ScanSafe was) and didn’t get the biggest buyout (ScanSafe again), but they timed it right and were in and out before they had to slog.
- Fidelis is forced to compete in the DLP market, although the bulk of their value is in managing a different (but related) threat. 7+ years in and they are just now starting to break out of that bubble.
- Core Security has spent 7 years building a market – something only possible with patient investors.
- Rumor is Palo Alto has some serious firewall and IPS capabilities, but rather than battling Cisco/Checkpoint, they are creating an ancillary market (application control) and then working on the cross-sell.
Most of you don’t buy innovative security products. After paying off your maintenance and license renewals, and picking up a few widgets to help with compliance, there isn’t a lot of budget left. You tend to only look for innovation when your existing tools are failing so badly that you can’t keep the business running.
That’s why it looks like there’s no security innovation – it’s simply ahead of market demand, and without a market it’s hard to survive. Unless we put together a charity fund or those academics get off their asses and work on something practical, we lack the necessary incubators to keep innovation alive until you’re ready to buy it.
So the question is… how can we inspire and sustain innovation when there’s no market for it? Or should we? When does innovation make sense? What innovation are we willing to spend on when there’s no market? When and how should we become early adopters?
Posted at Monday 22nd March 2010 6:00 pm
(17) Comments •
One of our readers, Jon Damratoski, is putting together a DLP program and asked me for some ideas on metrics to track the effectiveness of his deployment. By ‘ask’, I mean he sent me a great list of starting metrics that I completely failed to improve on.
Jon is looking for some feedback and suggestions, and agreed to let me post these. Here’s his list:
- Number of people/business groups contacted about incidents – tie in somehow with user awareness training.
- Remediation metrics to show trend results in reducing incidents – at the start of DLP we had X events; after talking to people about incidents for 30 days we now have Y events.
- Trend analysis over 3, 6, & 9 month periods to show how the number of events has reduced as remediation efforts kick in.
- Reduction in the average severity of an event per user, business group, etc.
- Trend: number of broken business policies.
- Trend: number of incidents related to automated business practices (automated emails).
- Trend: number of incidents that generated automatic email.
- Trend: number of incidents that were generated from service accounts – (emails, batch files, etc.)
I thought this was a great start, and I’ve seen similar metrics on the dashboards of many of the DLP products.
The only one I have to add to Jon’s list is:
- Average number of incidents per user.
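For what it’s worth, most of these roll up from the same raw incident data. Here is a rough Python sketch of two of them (average incidents per user, and a simple before/after reduction trend); the record fields are invented for illustration and won’t match any particular DLP product’s export format.

```python
from collections import Counter
from datetime import date

# Hypothetical incident records exported from a DLP tool.
incidents = [
    {"user": "jdoe",   "business_unit": "Finance", "severity": 3, "date": date(2010, 2, 10)},
    {"user": "asmith", "business_unit": "HR",      "severity": 5, "date": date(2010, 3, 12)},
    {"user": "jdoe",   "business_unit": "Finance", "severity": 2, "date": date(2010, 3, 15)},
]

def incidents_per_user(records):
    """Average number of incidents per observed user."""
    counts = Counter(r["user"] for r in records)
    return sum(counts.values()) / len(counts) if counts else 0.0

def reduction(records, baseline_end, period_end):
    """Percent change in incident volume between two periods (trend metric)."""
    before = sum(1 for r in records if r["date"] <= baseline_end)
    after = sum(1 for r in records if baseline_end < r["date"] <= period_end)
    return 100.0 * (before - after) / before if before else 0.0

print("Avg incidents per user:", incidents_per_user(incidents))
print("Reduction since baseline (%):", reduction(incidents, date(2010, 2, 28), date(2010, 3, 31)))
```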
Anyone have other suggestions?
Posted at Monday 22nd March 2010 4:48 pm
(6) Comments •
By Mike Rothman
The lack of credible and relevant network security metrics has been a thorn in my side for years. We don’t know how to define success. We don’t know how to communicate value. And ultimately, we don’t even know what we should be tracking operationally to show improvement (or failure) in our network security activities.
But we in the echo chamber seem to be happier bitching about this, or flaming each other on mailing lists, than focusing on finding a solution. Some folks have tried to drive towards a set of metrics that make sense, but most of the attempts are way too academic, and the metrics cost too much to collect to be usable in everyday practice. Not to mention that most of our daily activities aren’t even included in the models.
Not to pick on them too much, but I think these issues are highlighted in the way the Center for Internet Security has scoped out network security metrics. Basically, they didn’t. They have metrics on Incident Management, Vulnerability Management, Patch Management, Configuration Change Management, Application Security, and Financial Metrics. So the guy managing the network security devices doesn’t count? Again, I know CIS is working towards a lot of other stuff, but the reality is the majority of security spending is targeted at the network and endpoint domains, and there are no good metrics for those.
So let’s fix it.
Today, we are kicking off the next in our series of Quant projects. This one is called Network Security Operations Quant, and we aim to build a process map and underlying cost model for how organizations manage their network security devices.
The project’s formal objective and scope are:
The objective of Network Security Operations Quant is to develop a cost model for monitoring and managing network security devices that accurately reflects the associated financial and resource costs.
Secondarily, we also want to:
- Build the model in a manner that supports use as an operational efficiency model to help organizations optimize their network security monitoring and management processes, and compare costs of different options.
- Heavily engage the community and produce an open model with wide support and credibility, using the Totally Transparent Research process.
- Advance the state of IT metrics, particularly operational security metrics.
We are grateful to our friends at SecureWorks, who are funding this primary research effort.
As with all our quant processes, our methodology is:
- Establish the high level process map via our own research.
- Use a broad survey to validate and identify gaps in the process map.
- Define a set of subprocesses for each high-level process.
- Build metrics for each subprocess.
- Assemble the metrics into a model which can be used to track operational improvement.
From a scoping standpoint, we are going to deal with 5 different network security processes:
- Monitoring firewalls
- Monitoring IDS/IPS
- Monitoring server devices
- Managing firewalls
- Managing IDS/IPS
Yes, we know network security is bigger than just these 5 functions, but we can’t boil the ocean. There is a lot of other stuff we’ll model out using the Quant process over the next year, but this should be a good start.
Put up or shut up
We can’t do this alone. So we are asking for your help. First off, we are going to put together a “panel” of organizations to serve as the basis for our initial primary research. That means we’ll be either doing site visits or detailed phone interviews to understand how you undertake network security processes. We’ll also need the folks on the panel to shoot holes in our process maps before they are posted for public feedback. We are looking for about a dozen organizations from a number of different verticals and company sizes (large enterprise to mid-market).
As with all our research, there will be no direct attribution to your organization. We are happy to sign NDAs and the like. If you are interested in participating, please send me an email directly at mrothman (at) securosis . com.
Once the initial process maps are posted, we will post a survey to find out whether you actually do the steps we identify. We’ll also want your feedback on the process via posts that describe each step in the process. Everyone has an opportunity to participate and we hope you will take us up on it.
This is possibly the coolest research project I’ve personally been involved with and I’m really excited to get moving on it. We look forward to your participation, so we finally can get on the same page, and figure out how to measure how we “network security plumbers” do our business.
Posted at Monday 22nd March 2010 1:30 pm
(4) Comments •
By Mike Rothman
I got a little motivated this AM (it might have something to do with blowing off this afternoon to watch NCAA tourney games) and decided to double up on the Incite this week.
I read Adrian’s Friday Summary intro and it kind of bothered me. Mostly because I don’t know the answers either, and questions I can’t answer cause me stress and angst. Maybe it’s because I like to be a know-it-all, and it sucks when your own limitations smack you upside the head.
Anyhow, what do we do about this whole information sharing culture we’ve created – and more importantly, how do we make sure the next generation is protected from the new age scam artists who prey on over-sharers? I came across this coverage from RSA of Hugh Thompson’s interviews with Craigslist’s Craig Newmark and the Woz. Both Newmark and Wozniak believe education is the answer.
Truth be told, I have mixed feelings. I know the futility of widespread education, because you can’t possibly keep up with the attackers, not within a mass market context. Yet my plan is still to use education as one of a few tactics to keep my kids (and the Boss) safe online.
The reality is that because my kids will be trained on how to recognize fraud and what not to do online, they will be ahead of 95% of the other folks out there. And remember, most attackers prey on the lowest hanging fruit. As long as my kids aren’t that, I think things will work out OK.
But I also maintain pretty tight controls on the machines they use and the network they connect to. As they get more sophisticated, so will the defenses. I’ll implement a kids’ browsing network, and segment out my business machines and sensitive data. I already lock down their devices so they can’t install software (unless I know about it). At some point they’ll get their own machines and I’ll centralize the file storage (both for backup and oversight), so I can easily rebuild their machines every couple of months.
And we’ve got a lot of controls to protect our finances as well. We check the credit cards frequently (to ensure unauthorized transactions get caught quickly) and have a home incident response plan in the event one of my devices does get pwned.
Of course, that doesn’t answer the question of how to solve the macro problem, but honestly I’m not sure we can. Fraud has been happening since the beginning of time, and it’s a bit crazy to think we could stop it entirely.
But I can work my ass off to minimize the impact of the bad guys on my own situation, which is a pretty good objective – both at home and at work.
Have a great weekend.
Photo credit: “that low-hanging fruit they keep talking about in meetings” originally uploaded by travelskerricks
Bonus Incite 4 U
Getting screwed by the back channel – I read a recent post from the security career counselors (Mike Murray and Lee Kushner) and it got my goat a bit. The post was about how to deal with negative references, and I’m sensitive to this. I’ve been in a situation where a former boss sent a torpedo through my engine room as I had a new job lined up and closed. It was during a back channel conversation so I had no recourse (even though there was a non-disparagement clause in my exit agreement). Mike and Lee suggest first assembling a list of positive references that can offset a negative reference, as well as being candid with your prospective employer about the issues. This is great advice, since that’s exactly how I dealt with the situation. I did my own backchannel work and got folks inside the company to talk about me (on deep background), as well as confronting the situation head on. It worked out for me, but everyone needs to have contingency plans for everything, and a negative reference is certainly one of them. – MR
Isn’t UTM a hopping market? – From all the market share projections and growth numbers, the UTM (unified threat management) market is growing like gangbusters. Yet you see companies like Symantec (a few years ago) and McAfee (who recently shut down their SnapGear offering) getting out of the business. The reality is there are multiple market segments in network security and they require different solutions. UTM can be applicable to large enterprises, but they don’t buy combined solutions. They evaluate the products on a function-by-function basis. So they will compare the UTM-based IPS to the stand-alone IPS and so on, before they decide whether to embrace an integrated solution. Whereas the mid-market wants a toaster to make their problems go away. So hats off to McAfee for deciding they didn’t have a competitive offering or leveraged path to market, and getting out of the business. One of the hardest things to do is kill a product, no matter how competitive it is. Strong companies need to kill things, or they become overpopulated and operate sub-optimally. – MR
Stupid is as stupid does – I recently watched Forrest Gump again, and it’s a treasure trove of little sayings that really apply to our daily existence. We are security professionals, which means we should understand risks and act accordingly. How can you tell your internal users to do something if you don’t do it yourself? I guess you can, but come back into the shop after having your own machine pwned and see how much credibility you have left. So when I see the inevitable reports from security conferences about how stupid our own professionals are, it makes me nuts. At the RSA show, Motorola AirDefense found all sorts of wireless stupidity from the attendees, and it’s really nutty. If you don’t have a 3G card, then just make do without connecting for a few hours while you are at the show. You have a mobile device, and if it’s that important, go back to your hotel. At a security show they are always watching, even if not trying to put you on the wall of sheep. Get your head in the game, folks. – MR
Seeing the Hydra in action – We talk about the need for redundancy and contingency plans to keep our networks operational. Well, the bad guys do too. Krebs digs into some of the things the folks running the big botnets have done to keep operating even when one of their network connectivity points (Troyak) gets taken down. It’s fascinating stuff and just goes to show that our adversaries have well-thought-out business plans in place. Not sure you can put “Director of Network Resilience, Zeus Botnet” on a business card, but I assure you someone has that title, somewhere. And be sure to put Brian Krebs on your holiday card list. The work he does is consistently outstanding. – MR
PCI a success? Why do we bitch about it so much? – CSOAndy makes a good point in covering the PCI panel that happened at Security BSides at RSA. To be clear, PCI has done more for security folks than any other standard to date. Hands down. The real issue is that we in the echo chamber know how much more needs to be done. But referring back to RSnake’s conversations with black hat hackers, we are making progress because the bad guys have to work harder. PCI is partially responsible for making sure people are closing the windows and locking the doors. Of course, there are still ways in, and any standard will always be behind the current attack space, but let’s take a step back and remember how things were a few years ago. Tick tock tick tock. OK, enough reflection. The PCI folks need to figure out how to reduce the cycle time of their updates, or at least put tiered guidance in place: missing the low bar results in fines, while a set of advanced practices more accurately addresses the current attack patterns. – MR
Posted at Friday 19th March 2010 3:00 pm
(1) Comments •
By Adrian Lane
Your Facebook account gets compromised. Your browser flags your favorite sports site as a malware distributor. Your Twitter account is hacked through a phishing scam. You get AV pop-ups on your machine, but cannot tell which are real and which are scareware. Your identity gets stolen. You try to repair the damage and make sure it doesn’t happen again, only to get ripped off by the credit agency (you know who I am talking about). Exasperated, you just want to go home, relax, and catch up on March Madness. But it turns out the bracket email from your friend was probably another phishing attempt, and your alma mater suspends a star player while it investigates derogatory public comments – which it eventually discovers were forged. Man, it sucks to be Generation Y.
There has been an incredible cacophony over the last couple weeks across the mainstream media about social networks being manipulated for fun, personal satisfaction, and profit. Even the people in my semi-rural area are discussing how it has affected them and their children, so I know it is getting national attention. What I can’t figure out is how their behavior will change – if at all. RSnake discussed a Microsoft paper recently, expanding on its discussion of why training users on the dangers of unsafe browsing often does not make economic sense. Even if it were viable, people don’t want to learn all that stuff, as it makes web browsing more work than fun.
So what gives? I believe that our increasing use of and dependency on the Internet, and the corresponding increases in fraud and misuse, require change. But will people feel differently, and will this drive them to actually behave differently? Will the changes be technological, legal, or social? We could see tighter or looser privacy rules on websites, or legal precedents or new laws – we have already seen dramatic shifts in what younger people consider private and are willing to publicize online. The paper asserts that “The wisdom of the crowd discerns that ignoring some threats brings little actual harm …” which I totally agree with, and describes Twitter phishing and Facebook hacks. Bank accounts being drained and cars being shut down are a whole different level of problem, though. I really don’t have an answer – or even an inkling – of what happens next. I do think the problem has gotten sufficiently mainstream that we will see mainstream impacts and reactions, though. Interesting times!
On to the Summary:
Webcasts, Podcasts, Outside Writing, and Conferences
Favorite Securosis Posts
Other Securosis Posts
Favorite Outside Posts
- Rich: Conversations With a Blackhat. The best takeaway from RSnake’s summary of talking with some bad guys is that at least some of what we are doing on the security side is actually working. So much for the “security is failing” meme…
- David Mortman: Three Steps to a Rational Security Budget.
- Mike Rothman: Why I’m Skeptical of “Due Diligence” Based Security. I have no idea what Alex is talking about, but he has a picture of Anakin, Obi-Wan and Yoda with the glowing ghosts of John Lennon and George Harrison. So it’s my favorite of the week.
- Adrian Lane: Walkthrough: Click at Your Own Risk. Analysis of privacy and the manipulation of public impressions through social media. An excellent piece of analysis from … a football statistics site. Long but very informative, and a perspective I don’t think a lot of people appreciate.
Project Quant Posts
Top News and Posts
Blog Comment of the Week
Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to Andy Jaquith, in response to RSA Tomfoolery: APT is the Fastest Way to Identify Fools and Liars. When a comment makes me laugh out loud, it usually gets my vote!
I’ve been using the phrase “Advanced Persistent Chinese” lately. It sounds good, it’s more accurate, and it’s funny. What’s not to like?
I completely agree that the displays of vendor idiocy around APT are far too widespread. You can’t have a carnival without the barker, apparently.
Good seeing you, by the way, Any – albeit far too briefly.
Posted at Friday 19th March 2010 4:46 am
(0) Comments •
By Mike Rothman
As we wrap up our initial wave of Network Security Fundamentals, we’ve already discussed Default Deny, Monitoring everything, Correlation, and Looking for Not Normal. Now it’s time to see if we can actually get in the way of some of these nasty attacks.
So what are we trying to block? Basically a lot of the issues we find through looking for not normal. The general idea is to apply a positive security model not just to inbound traffic (default deny), but to outbound traffic as well. This is called egress filtering, and in practice it basically means turning your perimeter device inside out and applying policies to outbound traffic.
This defensive tactic ensures that non-standard ports and protocols don’t make their way out of your network. Filtering can also block reconnaissance tactics, network enumeration techniques, outbound spam bots, and those pesky employees running Internet businesses from within your corporate network. Amazingly enough this still happens, and too many organizations are none the wiser.
Defining Egress Filtering Policies
Your best bet is to start with recent incidents and their root causes. Define the outbound ports and protocols which allowed the data to be exfiltrated from your network. Yes, this is obvious, but it’s a start and you don’t want to block everything. Not unless you enjoy being ritually flayed by your users.
Next, leverage the initial steps in the Fundamentals series and analyze correlated data to determine what is normal. Armed with this information, turn to the recent high-profile attacks getting a lot of airtime. Think Aurora, and learn how that attack exfiltrates data (a custom encrypted protocol on port 443). For such higher-probability attacks, define another set of egress filtering rules to make sure you block (or at least are notified) when you have outbound traffic on the ports used during the attacks.
You can also use tighter location-based filtering policies, like not allowing traffic to countries where you don’t do business. This won’t work for mega-corporations doing business in every country in the world, but for the other 99.99% of you, it’s an option. Or you could enforce RFC standards on ports 80 and 443 to make sure no custom protocol is hiding anything in a standard HTTP stream.
Again, there are lots of different ways to set up your egress filtering rules. Most can help, depending on the nature of your network traffic, but none is a panacea. Whichever you decide to implement, test the rules in non-blocking mode first to make sure nothing breaks.
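For illustration only, here is a rough sketch of the logic an egress policy boils down to, including the non-blocking mode just mentioned. The flow-record fields, port lists, and country codes are all assumptions; real rules belong in your firewall, proxy, or IPS, not in a script.

```python
# Illustrative egress policy check. Field names, port lists, and country
# codes are placeholders; real enforcement belongs in your perimeter devices.

ALLOWED_OUTBOUND_PORTS = {25, 53, 80, 443}   # what "normal" looks like here
WATCHED_PORTS = {6667, 31337}                # ports seen in prior incidents
BLOCKED_COUNTRIES = {"XX", "YY"}             # placeholder country codes

BLOCKING_MODE = False  # start in alert-only mode until the rules are tuned

def evaluate_flow(flow):
    """Return (action, reason) for an outbound flow record."""
    if flow["dst_port"] in WATCHED_PORTS:
        return ("block" if BLOCKING_MODE else "alert", "port used in prior attacks")
    if flow["dst_country"] in BLOCKED_COUNTRIES:
        return ("block" if BLOCKING_MODE else "alert", "destination country not allowed")
    if flow["dst_port"] not in ALLOWED_OUTBOUND_PORTS:
        return ("alert", "non-standard outbound port")
    return ("allow", "")

print(evaluate_flow({"dst_port": 31337, "dst_country": "US"}))
```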
Blocking or Alerting
As you can imagine, it’s a dicey proposition to start blocking traffic that may break legitimate applications. So take care when defining these rules, or take the easy way out and just send alerts when one of your egress policies is violated. Of course, the alerting approach can (and probably will) result in plenty of false positives, but as you tune the policies, you’ll be able to minimize that.
Which brings up the hard truth of playing around with these policies: there are no shortcuts. Vendors who talk about self-defending anything, or learning systems, or anything else that doesn’t involve the brutal work of defining policies and tuning them over time until they work in your environment, basically don’t spend enough time in the real world. ‘nuff said.
To finish our discussion of blocking, again think about these rules in terms of your IPS. You block the stuff you know is bad, and you alert on the stuff you aren’t sure about. Let’s hope you aren’t so buried under alerts that something important gets by, but that’s life in the big city.
No Magic Bullets
Yes, we believe egress filtering is a key control in your security arsenal, but as with everything else, it’s not a panacea. There are lots of attacks which will skate by undetected, including those that send traffic over standard ports. So once again, it’s important to look at other controls to provide additional layers of defense. These may include outbound content filtering, application-aware perimeter devices, deep packet inspection, and others.
More Network Security Fundamentals
I’m going to switch gears a bit and start documenting Endpoint Security Fundamentals next week, but we’ll be back to networks soon enough, getting into wireless security, network pen testing, perimeter change control, and outsourced perimeter monitoring. Stay tuned.
Posted at Thursday 18th March 2010 4:03 pm
(0) Comments •
In the last two posts we covered the main preparation you need to get quick wins with your DLP deployment. First you need to put a basic enforcement process in place, then you need to integrate with your directory servers and major infrastructure. With these two bits out of the way, it’s time to roll up our sleeves, get to work, and start putting that shiny new appliance or server to use.
The differences between a long-term DLP deployment and our “Quick Wins” approach are goals and scope. With a traditional deployment we focus on comprehensive monitoring and protection of very specific data types. We know what we want to protect (at a granular level) and how we want to protect it, and we can focus on comprehensive policies with low false positives and a robust workflow. Every policy violation is reviewed to determine if it’s an incident that requires a response.
In the Quick Wins approach we are concerned less about incident management, and more about gaining a rapid understanding of how information is used within our organization. There are two flavors to this approach – one where we focus on a narrow data type, typically as an early step in a full enforcement process or to support a compliance need, and the other where we cast a wide net to help us understand general data usage to prioritize our efforts. Long-term deployments and Quick Wins are not mutually exclusive – each targets a different goal and both can run concurrently or sequentially, depending on your resources.
Remember: even though we aren’t talking about a full enforcement process, it is absolutely essential that your incident management workflow be ready to go when you encounter violations that demand immediate action!
Choose Your Flavor
The first step is to decide which of two general approaches to take:
- Single Type: In some organizations the primary driver behind the DLP deployment is protection of a single data type, often due to compliance requirements. This approach focuses only on that data type.
- Information Usage: This approach casts a wide net to help characterize how the organization uses information, and identify patterns of both legitimate use and abuse. This information is often very useful for prioritizing and informing additional data security efforts.
Choose Your Deployment Type
Depending on your DLP tool, it will be capable of monitoring and protecting information on the network, on endpoints, or in storage repositories – or some combination of these. This gives us three pure deployment options and four possible combinations.
- Network Focused: Deploying DLP on the network in monitoring mode provides the broadest coverage with the least effort. Network monitoring is typically the fastest to get up and running due to lighter integration requirements. You can often plug in a server or appliance in a few hours or less, and instantly start evaluating results.
- Endpoint Focused: Starting with endpoints should give you a good idea of which employees are storing data locally or transferring it to portable storage. Some endpoint tools can also monitor network activity on the endpoint, but these capabilities vary widely. In terms of Quick Wins, endpoint deployments are generally focused on analyzing stored content on the endpoints.
- Storage Focused: Content discovery is the analysis of data at rest in storage repositories. Since it often requires considerable integration (at minimum, knowing the username and password to access a file share), these deployments, like endpoint ones, involve more effort. That said, the ability to scan major repositories is very useful, and in some organizations it’s as important (or even more so) to understand stored data as to monitor information moving across the network.
Network deployments typically provide the most immediate information with the lowest effort, but depending on what tools you have available and your organization’s priorities, it may make sense to start with endpoints or storage. Combinations are obviously possible, but we suggest you roll out multiple deployment types sequentially rather than in parallel to manage project scope.
Define Your Policies
The last step before hitting the “on” switch is to configure your policies to match your deployment flavor.
In a single type deployment, either choose an existing category that matches the data type in your tool, or quickly build your own policy. In our experience, pre-built categories common in most DLP tools are almost always available for the data types that commonly drive a DLP project. Don’t worry about tuning the policy – right now we just want to toss it out there and get as many results as possible. Yes, this is the exact opposite of our recommendations for a traditional, focused DLP deployment.
In an information usage deployment, turn on all the policies or enable promiscuous monitoring mode. Most DLP tools only record activity when there are policy violations, which is why you must enable the policies. A few tools can monitor general activity without relying on a policy trigger (either full content or metadata only). In both cases our goal is to collect as much information as possible to identify usage patterns and potential issues.
Now it’s time to turn on your tool and start collecting results.
Don’t be shocked – in both deployment types you will see a lot more information than in a focused deployment, including more potential false positives. Remember, you aren’t concerned with managing every single incident, but want a broad understanding of what’s going on on your network, in endpoints, or in storage.
Analyze and PROFIT!
Now we get to the most important part of the process – turning all that data into useful information.
Once we collect enough data, it’s time to start the analysis process. Our goal is to identify broad patterns and spot any major issues. Here are some examples of what to look for:
- A business unit sending out sensitive data unprotected as part of a regularly scheduled job.
- Which data types broadly trigger the most violations.
- The volume of usage of certain content or files, which may help identify valuable assets that don’t cleanly match a pre-defined policy.
- Particular users or business units with higher numbers of violations or unusual usage patterns.
- False positive patterns, for tuning long-term policies later.
All DLP tools provide some level of reporting and analysis, but ideally your tool will allow you to set flexible criteria to support the analysis.
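If the built-in reporting doesn’t slice things the way you want, the same analysis is easy to rough out against exported incident data. Here is a quick sketch, with invented field names, that surfaces the business units and data types generating the most violations and flags combinations that recur often enough to suggest an automated job.

```python
from collections import Counter

# Hypothetical violation records; field names are placeholders for illustration.
violations = [
    {"business_unit": "Finance",     "data_type": "PCI",         "channel": "email"},
    {"business_unit": "Finance",     "data_type": "PCI",         "channel": "email"},
    {"business_unit": "Engineering", "data_type": "source_code", "channel": "webmail"},
]

by_unit = Counter(v["business_unit"] for v in violations)
by_type = Counter(v["data_type"] for v in violations)
by_unit_channel = Counter((v["business_unit"], v["channel"]) for v in violations)

print("Top business units:", by_unit.most_common(5))
print("Top data types:", by_type.most_common(5))
# Repeated unit/channel combinations may point at scheduled or automated jobs.
print("Possible automated jobs:", [k for k, n in by_unit_channel.items() if n >= 2])
```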
What Did We Achieve?
If you followed this process, by now you’ve created a base for your ongoing DLP usage while achieving valuable short-term goals. In a short amount of time you have:
- Established a flexible incident management process.
- Integrated with major infrastructure components.
- Assessed broad information usage.
- Set a foundation for later focused efforts and policy tuning to support long-term management.
Thus by following the Quick Wins process you can show immediate results while establishing the foundations of your program, all without overwhelming yourself by forcing unprepared action on all possible alerts before you understand information usage patterns.
Not bad, eh?
Posted at Thursday 18th March 2010 12:02 am
(0) Comments •
I’m about to commit the single most egotistical act of my blogging/analyst career. I’m going to make up my own law and name it after myself. Hopefully I’m almost as smart as everyone says I think I am.
I’ve been talking a lot, and writing a bit, about the intersection of security and psychology. One example is my post on the anonymization of losses, and another is the one on noisy vs. quiet security threats.
Today I read a post by RSnake on the effectiveness of user training and security products, which was inspired by a great paper from Microsoft: So Long, And No Thanks for the Externalities: The Rational Rejection of Security Advice by Users.
I think we can combine these thoughts into a simple ‘law’:
The rate of user compliance with a security control is directly proportional to the pain of the control vs. the pain of non-compliance.
We need some supporting definitions:
- Rate of compliance equals the probability the user will follow a required security control, as opposed to ignoring or actively circumventing said control.
- The pain of the control is the time added to an established process, and/or the time to learn and implement a new process.
- The pain of non-compliance includes the consequences (financial, professional, or personal) and the probability of experiencing said consequences. Consequences exist on a spectrum – with financial as the most impactful, and social as the least.
- The pain of non-compliance must be tied to the security control so the user understands the cause/effect relationship.
I could write it out as an equation, but then we’d all make up magical numbers instead of understanding the implications.
Psychology tells us people only care about things which personally affect them, and that fuzzy principles like “the good of the company” are low on the importance scale. It also tells us immediate risks hold our attention far more than long-term risks, and that we rapidly de-prioritize both high-impact low-frequency events and high-frequency low-impact events. Economics teaches us how to evaluate these factors and use external influences to guide widescale behavior.
Here’s an example:
Currently most security incidents are managed out of a central response budget, as opposed to business units paying the response costs. Economics tells us that we can likely increase the rate of compliance with security initiatives if business units have to pay for response costs they incur, thus forcing them to directly experience the pain of a security incident.
I suspect this is one of those posts that’s going to be edited and updated a bunch based on feedback…
Posted at Wednesday 17th March 2010 8:25 pm
(7) Comments •
By Mike Rothman
“WE HAVE MET THE ENEMY AND HE IS US.” POGO (1970)
I’ve worked for companies where we had to spend so much time fighting each other, the market got away. I’ve also worked at companies where internal debate and strife made the organization stronger and the product better. But there are no pure absolutes – as much as I try to be binary, most companies include both sides of the coin.
But when I read of the termination of Pennsylvania’s CISO because he dared to actually talk about a breach, it made me wonder – about everything. Dennis hit the nail on the head: this is bad for all of us. Can we be successful? We all suffer from a vacuum of information. That was the premise of Adam Shostack and Andrew Stewart’s book The New School of Information Security: that we need to share information, both good and bad, flattering and unflattering, to make us better at protecting stuff.
Data can help. Unfortunately most of the world thinks that security through obscurity is the way to go. As Adrian pointed out in Monday’s FireStarter, there isn’t much incentive to disclose anything unless an organization must – by law. The power of negative PR grossly outweighs the security benefit of information sharing. Which is a shame.
So what do you do? Give up? Well, actually maybe you do give up. Not on security in general, but on your organization. Every day you need to figure out if you can overcome the enemy within your four walls. If you can’t, then move on. I know, now is the wrong time to leave a job. I get that. But how long can you go in every day and get kicked in the teeth? Only you can decide that. But if your organization is a mess, don’t wait for it to get better.
If you do decide to stay, you need to discover the power of the peer group. Your organization will not sanction it, and don’t blame me, but find a local or industry group of peeps where you can share your dirt. You take a blood oath (just like in grade school) that what is spoken about in the group stays within the group and you spill the beans. You learn from what your peers have done, and they learn from you.
At this point we must acknowledge that widespread information sharing is not going to happen. Which sucks, but it is what it is. So we need to get creative and figure out an alternative means to get the job done. Find your peeps and learn from them.
Photo credit: “Pogo – Walt Kelly (1951) – front cover” originally uploaded by apophysis_rocks
Incite 4 U
Time to study marketing too… – RSnake is starting to mingle with some shady characters. Well, maybe not shady, but certainly on the wrong side of the rule of law. One of his conclusions is that it’s getting harder for the bad guys to do their work, at least the work of compromising meaty valuable targets. That’s a good thing. But the black hats are innovative and playing for real money, so they will figure something out and their models will evolve to continue generating profits. It’s the way of the capitalist. This idea of assigning a much higher value to a zombie within the network of a target makes perfect sense. It’s no different than how marketing firms charge a lot more for leads directly within the target market. So it’s probably not a bad idea for us security folks to study a bit of marketing, which will tell us how the bad guys will evolve their tactics. – MR
Lies, Damn Lies, and Exploits – We’ve all been hearing a ton about that new “Aurora” exploit (mostly because of all the idiots who think it’s the same thing as APT), but NSS Labs took a pretty darn interesting approach to all the hype. Assuming that every anti-malware vendor on the market would block the known Aurora exploit, they went ahead and tested the major consumer AV products against fully functional variants. NSS varied both the exploit and the payload to see which tools would still block the attack. The results are uglier than a hairless cat with a furball problem. Only one vendor (McAfee) protected against all the variants, and some (read the report yourself) couldn’t handle even the most minor changes. NSS is working on a test of the enterprise versions, but I love when someone ignites the snake oil. – RM
I hate C-I-A – Confidentiality, Integrity, and Availability is what it stands for. I was reminded of this reading this CIA Triad Post earlier today. Every person studying for their CISSP is taught that this is how they need to think about security. I always felt this was BS, along with a lot of other stuff they teach in CISSP classes, but that’s another topic. CIA just fails to capture the essence of security. Yeah, I have to admit that CIA represents three handy buckets that can compartmentalize security events, but they so missed the point about how one should approach security that I have become repulsed by the concept. Seriously, we need something better. Something like MSB. Misuse-Spoof-Break. Do something totally unintended, do something normal pretending to be someone else, or change something. Isn’t that a better way to think about security threats? It’s the “What can we screw with next?” triad. And push “denial of service” to the back of your mind. Script kiddies used to think it was fun, and some governments still do, but when it comes to hacking, it’s nothing more than a socially awkward cousin of the other three. – AL
Signatures in burglar alarm clothing – Pauldotcom, writing with his Tenable hat on, explains a method he calls “burglar alarms,” as a way to deflate some APT hype. This method ostensibly provides a heads-up on attacks we haven’t seen before. He uses this as yet another example of how to detect an APT. I know I’m not the sharpest tool in the shed, but I don’t see how identifying a set of events that should not happen, and looking for signs of their occurrence is any different than the traditional black list model used by our favorite security punching bags – IDS and AV. The list of things that should not happen is infinite. Literally. Yes, you use common sense and model the most likely things that shouldn’t happen, but in the end the list is too long and unwieldy, especially given today’s complex technology stack. Even better is his close: The way to catch the APTs is to meet them with unexpected defenses that they’ve never heard of before. I’m just wondering if I can buy the unexpected defense plug-in for Nessus on Tenable’s website. – MR
To tell the web filtering truth – You’ve got to applaud Bruce Green, COO of M86, for coming out and telling the truth: Internet filtering won’t prevent people deliberately looking for inappropriate material from accessing blocked content. Several British ISPs are deploying content filtering on a massive scale to block ‘inappropriate material’ – obviously a euphemism for pr0n. M86, for those not aware, is the content security trifecta of 8e6, Marshal and Finjan, with a sprinkle of Avinti on top. They have a long track record of web content filtering in the education space. The Internet filtering trial was based upon M86’s technology and, like all filtering technologies, it works exceptionally well in controlled environments when users do not take steps to avoid or conceal activity from the filters. But to Mr. Green’s point, those who are serious about their Internet ‘inappropriate material’ have dozens of ways to get around this type of filtering. What seems misleading about the study is the claim that they were “100% effective” in their ability to identify ‘inappropriate material’. But catching what you were expecting is unimpressive. As I understand the trial, they were not blocking, only identifying signatures. This means no one has had any reason to defeat the filters. At least M86 has no illusions about 100% success when they roll this out, and if nothing else they are going to get fantastic data on how to avoid Internet filtering. – AL
Leverage makes the rational security budget … more rational – Combine security skills, secure coding evangelism, a general disdain of most puffery, and a large dose of value economics, and you basically get Gunnar in a nutshell. He really nails it with this post about putting together a rational security budget. I suggest a similar model in the Pragmatic CSO, but the one thing Gunnar doesn’t factor in here (maybe because it’s a post and not a book) is the concept of leverage. I love the idea of thinking about security spend relative to IT spend, but the reality is a lot of the controls you’d need for each project can be used by the others. Thus leverage – pay once, and use across many. Remember, we have to work smarter since we aren’t getting more people or funding any time soon. So make sure leverage is your friend. – MR
Vapor Audits – I’ve been spending a lot more time lately focusing on cloud computing; partially because I think it’s so transformative that we are fools if we think it’s nothing new, and partially because it is a major driver for information-centric security. Even though we are still on the earliest fringes, cloud computing changes important security paradigms and methods of practice. Running a server in Amazon EC2? Want to hit it with a vulnerability scan? Oops – that’s against the terms of service. Okay, how about auditing which administrators touched your virtual server instance? Umm… not a supported feature. Audit, assessment, and assurance are major inhibitors to secure cloud computing adoption, which is why we all need to pay attention to the CloudAudit/A6 (Automated Audit, Assertion, Assessment, and Assurance API) group founded by Chris Hoff. If you care about cloud computing, you need to monitor or participate in this work. – RM
Learning HaXor skillz – Most of us are not l33t haXors, we are just trying to get through the day. The good news is there are lots of folks who have kung fu, and are willing to teach you what they know. The latest I stumbled upon is Mubix. He’s got a new site called Practical Exploitation, where the plan is to post some videos and other materials to teach the trade. Thus far there are two videos posted, one on leveraging msfconsole and the other on comparing a few tools for DNS enumeration. Good stuff here and bravo to Mubix. We need more resources like this. Hmmm, this could be a job for SecurosisTV… – MR
Posted at Wednesday 17th March 2010 7:00 am
(2) Comments •
By Adrian Lane
I ran into Slavik Markovich of Sentrigo, and David Maman of GreenSQL, on the vendor floor at the RSA Conference. I probably startled them with my negative demeanor – having just come from one vendor who seems to deliberately misunderstand preventative and detective controls, and another who thinks regular expression checks for content analysis are cutting edge. Still, we got to chat for a few minutes before rushing off to another product briefing. During that conversation it dawned on me that database activity monitoring vendors continue to refine both how they detect malicious database queries and how they deploy to block database activity. And not just these vendors – others are improving as well.
For me, the interesting aspect is the detection methods being used – particularly how incoming SQL statements are analyzed. For blocking to be viable, the detection algorithms have to be precise, with a low rate of false positives (where have you heard that before?). While there are conceptual similarities between database blocking and traditional firewalls or WAF, the side effects of blocking are more severe and difficult to diagnose. That means people are far less tolerant of screw-ups because they are more costly, but the need to detect suspicious activity remains strong. Let’s take a look at some of the analytics being used today:
- Some tools block specific statements. For example, there is no need to monitor a ‘create view’ command coming from the web server. But blocking administrative use and alerting when remote administrative commands come into the database is useful for detection of problems.
- Some tools use metadata & attribute-based profiles. For example, I worked on a project once to protect student grades in a university database, and kill the connection if someone tried to alter the database contents between 6pm and 6am from an unapproved terminal. User, time of day, source application, affected data, location, and IP address are all attributes that can be checked to enforce authorized usage.
- Some tools use parameter signatures. The classic example is “1=1”, but there are many other common signatures for SQL injection, buffer overflow, and permission escalation attacks.
- Some tools use lexical analysis. This is one of the more interesting approaches to come along in the last couple of years. By examining the use of the SQL language, and the various structural options available with individual statements, we can detect anomalies. For example, there are many different options for the create table command on Oracle, but certain combinations of delimiters or symbols can indicate an attempt to confuse the statement parser or inject code. In essence you define the subset of the query language you will allow, along with suspicious variations.
- Some tools use behavior. For example, while any one query may have been appropriate, a series of specific queries indicates an attack. Or a specific database reference such as a user account lookup may be permissible, but attempting to select all customer accounts might not be. In some cases this means profiling typical user behavior, using statistical analysis to quantify unusual behavior, and blocking anything ‘odd’.
- Some tools use content signatures. For example, looking at the content of the variables or blobs being inserted into the database for PII, malware, or other types of anomalous content.
All these analytical options work really well for one or two particular checks, but stink for other comparisons. No single method is best, so having multiple options allows choosing the best method to support each policy.
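To make a couple of these concrete, here is a toy sketch combining a parameter signature check with an attribute-based rule along the lines of the university grades example above. The patterns and policy values are made up, and real DAM products rely on full SQL parsing and behavioral profiling rather than a couple of regexes.

```python
import re
from datetime import datetime

# Toy examples only: real products parse SQL rather than pattern-match it.
INJECTION_SIGNATURES = [
    re.compile(r"\b1\s*=\s*1\b"),            # classic tautology
    re.compile(r";\s*drop\s+table", re.I),    # piggybacked statement
]

def parameter_signature_check(statement):
    """Flag statements matching known injection signatures."""
    return any(sig.search(statement) for sig in INJECTION_SIGNATURES)

def after_hours_from_unapproved_terminal(timestamp, terminal, approved_terminals):
    """Attribute-based profile: changes between 6pm and 6am are only
    allowed from approved terminals."""
    after_hours = timestamp.hour >= 18 or timestamp.hour < 6
    return after_hours and terminal not in approved_terminals

stmt = "SELECT * FROM grades WHERE student_id = '42' OR 1=1"
print("signature hit:", parameter_signature_check(stmt))
print("attribute hit:", after_hours_from_unapproved_terminal(
    datetime(2010, 3, 16, 23, 0), "term-99", {"term-01"}))
```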
Most of the monitoring solutions that employ blocking will be deployed similarly to a web application firewall: as a stand-alone proxy service in front of the database, an embedded proxy service that is installed on the database platform, or as an out-of-band monitor that kills suspicious database sessions. And all of them can be deployed to monitor or block. While the number of companies that use database activity blocking is miniscule, I expect this to grow as people gradually gain confidence with the tools in monitoring mode.
Some vendors employ two detection models, but it’s still pretty early, so I expect we will see multiple options provided in the same way that Data Loss Prevention (DLP) products do. What really surprises me is that the database vendors have not snapped up a couple of these smaller firms and incorporated their technologies directly into the databases. This would ease deployment, either as an option for the networking subsystem, or even as part of the SQL pre-processor. Given that a single database installation may support multiple internal and external web applications, it’s very dangerous to rely on applications to defend against SQL injection, or to place too much faith in the appropriateness of administrative commands reaching the database engine. ACLs are particularly suspect in virtualized and cloud environments.
Posted at Tuesday 16th March 2010 10:08 pm
(0) Comments •
In Part 1 of this series on Low Hanging Fruit: Quick Wins with DLP, we covered how important it is to get your process in place, and the two kinds of violations you should be immediately prepared to handle. Trust us – you will see violations once you turn your DLP tool on.
Today we’ll talk about the last two pieces of prep work before you actually flip the ‘on’ switch.
Prepare Your Directory Servers
One of the most consistent problems with DLP deployments has nothing to do with DLP, and everything to do with the supporting directory (AD, LDAP, or whatever) infrastructure. Since DLP is concerned with user actions across files, systems, and the network (over multiple protocols), it’s important to know exactly who is committing all these violations. With a file or email it’s usually straightforward to identify the user based on their mail or network logon ID, but once you start monitoring anything else, such as web traffic, you need to correlate the user’s network (IP) address back to their name.
This correlation is built into nearly every DLP tool: they track which network addresses are assigned to users as they log onto the network or a service.
The more difficult problem tends to be the business process: correlating these technical IDs back to real human beings. Many organizations fail to keep their directory servers current, and as a result it can be hard to find the physical body behind a login. It gets even harder if you need to figure out their business unit, manager, and so on.
For a quick win, we suggest you focus predominantly on making sure you can track most users back to their real-world identities. Ideally your directory will also include role information, so you can filter DLP policy violations based on business unit. Someone in HR or Legal usually has authorization for different sensitive information than people in IT or Customer Service, and if you have to figure all this out manually when a violation occurs, it will really hurt your efficiency later.
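For readers who want to see what that correlation actually involves, here is a minimal sketch, assuming hypothetical CSV exports of DHCP leases and the directory (the file names and columns are made up for illustration). A real DLP tool pulls the same data from DHCP logs and AD/LDAP directly, but the lookup logic is essentially this.

```python
import csv

# Hypothetical sketch of IP-to-identity correlation. File names, formats, and
# columns are made up; a DLP tool gets this from DHCP logs and AD/LDAP directly.

def load_dhcp_leases(path: str) -> dict:
    """Map IP address -> network logon ID from a DHCP lease export (CSV: ip,username)."""
    with open(path, newline="") as f:
        return {row["ip"]: row["username"] for row in csv.DictReader(f)}

def load_directory(path: str) -> dict:
    """Map logon ID -> (real name, business unit) from a directory export (CSV: username,name,unit)."""
    with open(path, newline="") as f:
        return {row["username"]: (row["name"], row["unit"]) for row in csv.DictReader(f)}

def identify(ip: str, leases: dict, people: dict):
    """Turn a violating IP address into a person and business unit, if possible."""
    login = leases.get(ip)
    if login is None:
        return None                                    # stale lease data: the common failure
    return people.get(login, (login, "unknown unit"))  # directory gap: fall back to the raw login

leases = load_dhcp_leases("dhcp_leases.csv")
people = load_directory("directory_export.csv")
print(identify("10.0.12.34", leases, people))
```

If the directory data is stale, the lookup fails at exactly the point this sketch returns None, which is why cleaning up the directory ahead of time pays off.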
Integrate with Your Infrastructure
The last bit of preparation is to integrate with the important parts of your infrastructure. How you do this will vary a bit depending on your initial focus (endpoint, network, or discovery). Remember, this all comes after you integrate with your directory servers.
The easiest deployments are typically on the network side, since you can run in monitoring mode without having to do too much integration. This might not be your top priority, but adding what’s essentially an out-of-band network sniffer is very straightforward. Most organizations connect their DLP monitor to their network gateway using a SPAN or mirror port. If you have multiple locations, you’ll probably need multiple DLP boxes, and will have to integrate them using the built-in multi-system management features common to most DLP tools.
Most organizations also integrate a bit more directly with email, since it is particularly effective without being especially difficult. The store-and-forward nature of email, compared to real-time protocols, makes many types of analysis and blocking easier. Many DLP tools include an embedded mail server (MTA, or Mail Transfer Agent) which you can simply add as another hop in the email chain, just like you probably deployed your spam filter.
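As an illustration of that ‘another hop’ idea, here is a toy content-filtering MTA sketch using the third-party aiosmtpd package (an assumption on my part; any embeddable SMTP server would do). It accepts mail, applies a trivially simple placeholder policy, and would relay clean messages onward; real DLP mail integration obviously performs far more sophisticated content analysis.

```python
# Toy 'extra hop' mail filter using the third-party aiosmtpd package (assumed
# installed: pip install aiosmtpd). Real DLP MTAs do deep content analysis and
# then relay clean mail to the next hop; relaying is omitted here.
from aiosmtpd.controller import Controller

class DLPHandler:
    async def handle_DATA(self, server, session, envelope):
        body = envelope.content.decode("utf-8", errors="ignore")
        if "CONFIDENTIAL" in body:                    # trivial placeholder policy
            return "550 Message blocked by DLP policy"
        # A real filter would hand the message to the downstream mail server here.
        return "250 Message accepted for delivery"

if __name__ == "__main__":
    controller = Controller(DLPHandler(), hostname="0.0.0.0", port=10025)
    controller.start()                                # runs in a background thread
    input("DLP mail hop listening on port 10025; press Enter to stop\n")
    controller.stop()
```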
Endpoint rollouts are a little tougher because you must deploy an agent onto every monitored system. The best way to do this (after testing) is to use whatever software deployment tool you currently use to push out updates and new software.
Content discovery – scanning data at rest in storage – can be a bit tougher, depending on how many servers you need to scan and who manages them. For quick wins, look for centralized storage where you can start scanning remotely through a file share, as opposed to widely distributed systems where you have to manually obtain access or install an agent. This reduces the political overhead and you only need an authorized user account for the file share to start the process.
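As a rough sketch of how little it takes to start scanning a mounted share, here is an illustrative example that walks a directory tree and flags files matching a couple of PII-like regexes. The mount point and patterns are hypothetical, and commercial discovery engines rely on far more accurate techniques (exact data matching, fingerprinting, and so on).

```python
import os
import re

# Illustrative sketch of remote content discovery against a mounted file share.
# Paths and patterns are examples only.

PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]*?){16}\b"),
}

def scan_share(mount_point: str, max_chars: int = 1_000_000):
    """Walk a mounted share and report files containing PII-like patterns."""
    findings = []
    for dirpath, _dirnames, filenames in os.walk(mount_point):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "r", errors="ignore") as f:
                    text = f.read(max_chars)          # cap reads to keep scans fast
            except OSError:
                continue                              # unreadable file: skip it
            hits = [label for label, pat in PII_PATTERNS.items() if pat.search(text)]
            if hits:
                findings.append((path, hits))
    return findings

# Example: scan a share mounted at /mnt/finance (hypothetical path)
for path, hits in scan_share("/mnt/finance"):
    print(f"{path}: {', '.join(hits)}")
```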
You’ll notice we haven’t talked about all the possible DLP integration points, but instead focused on the main ones to get you up and running as quickly as possible. To recap:
- For all deployments: Directory services (usually your Active Directory and DHCP servers).
- For network deployments: Network gateways and mail servers.
- For endpoint deployments: Software distribution tools.
- For discovery/storage deployments: File shares on the key storage repositories (you generally only need a username/password pair to connect).
Now that we are done with all the prep work, in our next post we’ll dig in and focus on what to do when you actually turn DLP on.
Posted at Monday 15th March 2010 10:44 pm
(0) Comments •
By Adrian Lane
On Monday March 1st, the Experienced Security Professionals Program (ESPP) was held at the RSA conference, gathering 100+ practitioners to discuss and debate a few topics. The morning session was on “The Changing Face of Cyber-crime”, and covered the challenges law enforcement faces in prosecuting electronic crimes, as well as some of the damage companies suffer when attackers steal data. As could be expected, the issue of breach disclosure came up, and of course several corporate representatives pulled out the tired argument of “protecting their company” as their reason not to disclose breaches. The FBI and US Department of Justice representatives on the panel referenced several examples where public firms have gone so far as to file injunctions against the FBI and other federal entities to stop them investigating breaches. Yes, you read that correctly. Companies sued to stop the FBI from investigating.
And we wonder why cyber-attacks continue? It’s hard enough to catch these folks when all relevant data is available, so if you have victims intentionally stopping investigations and burying the evidence needed for prosecution, that seems like a pretty good way to ensure criminals will avoid any penalties, and to encourage attackers to continue their profitable pursuits at shareholder expense. The path of least resistance continues to get easier.
Let’s look past the murky grey area of breach disclosure regarding private information (PII) for a moment, and just focus on the theft of intellectual property. If anything, there is much less disclosure of IP theft, thanks to BS arguments like “It will hurt the stock price,” “We have to protect the shareholders,” and “Our responsibility is to preserve shareholder value.” Those were the exact phrases I heard at the ESPP event, and they made my blood boil. All these statements are complete cop-outs, motivated by corporate officers’ wish to avoid embarrassment and potential losses of their bonuses, as opposed to making sure shareholders have full and complete information on which to base investment decisions.
How does this impact stock price? If IP has been stolen and is being used by competitors, it’s reasonable to expect the company’s performance in the market will deteriorate over time. R&D advances come at significant cost and risk, and if that value is compromised, the shareholders eventually lose. Maybe it’s just me, but that seems like material information, and thus needs to be disclosed. In fact, withholding this material information, and failing to give shareholders enough information to understand their investment risks, runs counter to the fiscal responsibility corporate officers accept in exchange for their 7-figure paychecks. Many, including the SEC and members of Congress, argue that this is exactly the kind of information covered by the disclosure controls under Section 302 of Sarbanes-Oxley, which require companies to disclose risks to the business.
That said, I understand public companies will not disclose breaches of IP. It’s not going to happen. Despite my strong personal feelings that breach notification is essential to the overall integrity of global financial markets, companies will act in their own best interests over the short term. Looking beyond the embarrassment factor, potential brand impact, and competitive disadvantages, the single question that foils my idealistic goal of full disclosure is: “How does the company benefit from disclosure?”
That’s right – it’s not in the company’s own interest to disclose, and unless it can realize some benefit greater than the estimated loss from the stolen IP (Google’s Chinese PR stunt, anyone?), it will not disclose. Public companies need to act according to their own best interests. It’s not noble – in fact it’s entirely selfish – but it’s a fact. Since the company already suffers the losses from the stolen IP, and disclosure probably only increases those losses, there is no upside to disclosing unless there are regulatory penalties for staying quiet. So we are at an impasse between what is right and what is realistic. How do we fix this? More legislation? A parade down Wall Street for those admitting IP theft? Financial incentives? Help a brother out here – how can we get IP breach disclosure, and get it now?
Posted at Monday 15th March 2010 2:09 pm
(4) Comments •
I love the week after RSA. Instead of being stressed to the point of cracking I’m basking in the glow of that euphoria you only experience after passing a major milestone in life.
Well, it lasted almost a full week – until I made the mistake of looking at my multi-page to-do list.
RSA went extremely well this year, and I think most of our pre-show predictions were on the money. Not that they were overly risky, but we got great feedback on the Securosis Guide to RSA 2010, and plan to repeat it next year. The Disaster Recovery Breakfast also went extremely well, with solid numbers and great conversation (thanks to Threatpost for co-sponsoring).
Now it’s back to business, and we need your help. We are currently running a couple concurrent research projects that could use your input.
For the first one, we are looking at the new dynamics of the endpoint protection/antivirus market. If you are interested in helping out, we are seeking customer references willing to talk about how your deployments are going. A big focus is on the second-layer players like Sophos, Kaspersky, and ESET; but we also want to talk to a few people using Symantec, McAfee, and Trend.
We are also looking into application and database encryption solutions – if you are using NuBridges, Thales, Voltage, SafeNet, RSA, etc. for application or database encryption, please drop us a line.
Although we talk to a lot of you when you have questions or problems, you don’t tend to call us when things are running well. Most of the vendors supply us with some clients, but it’s important to balance them out with more independent references.
If you are up for a chat or an email interview, please let us know at email@example.com or one of our personal emails. All interviews are on deep background and never revealed to the outside world. Unless Jack Bauer or Chuck Norris shows up. We have exemptions for them in all our NDAs.
Er… I suppose I should get to this week’s summary now…
But only after we congratulate David Mortman and his wife on the birth of Jesse Jay Campbell-Mortman!
Webcasts, Podcasts, Outside Writing, and Conferences
Favorite Securosis Posts
Other Securosis Posts
Favorite Outside Posts
Project Quant Posts
Research Reports and Presentations
Top News and Posts
Blog Comment of the Week
Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to Garry, in response to RSA Tomfoolery: APT is the Fastest Way to Identify Fools and Liars.
APT = China, and we (people who have serious jobs) can’t say bad things about China.
That pretty much covers it, yes?
Posted at Friday 12th March 2010 4:15 am
(2) Comments •
Two of the most common criticisms of DLP that come up in user discussions are a) its complexity and b) the fear of false positives. Security professionals worry that DLP is an expensive widget that will fail to deliver the expected value – turning into yet another black hole of productivity. But when used properly, DLP provides rapid assessment and identification of data security issues not available with any other technology.
I don’t mean to play down the real complexities you might encounter as you roll out a complete data protection program. Business use of information is itself complicated, and no tool designed to protect that data can simplify or mask the underlying business processes. However, there are steps you can take to obtain significant immediate value and security gains without blowing your productivity or wasting important resources.
Over the next few posts I’ll highlight the lowest hanging fruit for DLP, refined in conversations with hundreds of DLP users. These aren’t meant to cover the entire DLP process, but to show you how to get real and immediate wins before you move on to more complex policies and use cases.
Establish Your Process
Nearly every DLP reference I’ve talked with has discovered actionable offenses committed by employees as soon as they turn the tool on. Some of these require little more than contacting a business unit to change a bad process, but quite a few result in security guards escorting people out of the building, or even legal action. One of my favorite stories is the time the DLP vendor plugged in the tool for a lunchtime demonstration on the same day a senior executive decided to send proprietary information to a competitor. Needless to say, the vendor lost their hard drives that day, but they didn’t seem too unhappy.
Even if you aren’t planning on moving straight to enforcement mode, you need to put a process in place to manage the issues that will crop up once you activate your tool. The kinds of issues you need to figure out how to address in advance fall into two categories:
- Business Process Failures: Although you’ll likely manage most business process issues as you roll out your sustained deployment, the odds are high that some will be serious enough to require immediate remediation. These are often compliance related.
- Egregious Employee Violations: Most employee-related issues can be dealt with as you gradually shift into enforcement mode, but as in the example above, you will encounter situations requiring immediate action.
In terms of process, I suggest two tracks based on the nature of the incident. Business process failures usually involve escalation within security or IT, possible involvement of compliance or risk management, and engagement with the business unit itself. You are less concerned with getting someone in trouble than with stopping the problem.
Employee violations, due to their legal sensitivity, require a more formal process. Typically you’ll need to open an investigation and immediately escalate to management while engaging legal and human resources (since this might be a firing offense). Contingencies need to be established in case law enforcement is engaged, including plans to provide forensic evidence to law enforcement without having them walk out the door with your nice new DLP box and hard drives. Essentially you want to implement whatever process you already have in place for internal employee investigations and potential termination.
In our next post we’ll focus more on rolling out the tool, followed by how to configure it for those quick wins I keep teasing you with.
Posted at Thursday 11th March 2010 9:49 pm
(0) Comments •