ESF: Triage: Fixing the Leaky Buckets

As we discussed in the last ESF post on prioritizing the most significant risks, the next step is to build, communicate, and execute on a triage plan to fix those leaky buckets. The plan consists of four sections: Risk Confirmation, Remediation Plan, Quick Wins, and Communications.

Risk Confirmation

Coming out of the prioritize step, before we start committing resources and/or pulling the fire alarm, let’s take a deep breath and make sure our ranked list really represents the biggest risks. How do we do that? Basically by using the same process we used to come up with the list: start with the most important data, and work backwards based on the issues we’ve already found. The best way I know to get everyone on the same page is a streamlined meeting between the key influencers of security priorities. That involves folks not just within the IT team, but also probably some tech-savvy business users – since it’s their data at risk. Yes, we are going to go back to them later, once we have the plan. But it doesn’t hurt to give them a heads-up early in the process about what the highest-priority risks are, and to get their buy-in early and often throughout the process.

Remediation Plan

Now comes the fun part: we have to figure out what’s involved in addressing each of the leaky buckets. That means figuring out whether you need to deploy a new product, optimize a process, or both. Keep in mind that for each of the discrete issues, you want to define the fix, the cost, the effort (in hours), and the timeframe commitment to get it done. No, none of this is brain surgery, and you probably have a number of fixes on your project plan already. But hopefully this process provides the needed incentive to get some of these projects moving. Once the first draft of the plan is completed, start lining up the project requirements with the reality of budget and availability of resources. That way, when it comes time to present the plan to management (including milestones and commitments), you have already had the visit with Mr. Reality, so you can stick to what is feasible.

Quick Wins

As you are doing the analysis to build the remediation plan, it’ll be obvious that some fixes are cheap and easy. We recommend you take the risk (no pun intended) and take care of those issues first, regardless of where they end up on the risk priority list. Why? We want to build momentum behind the endpoint security program (or any program, for that matter), and that involves showing progress as quickly as possible. You don’t need to ask permission for everything. A rough sketch of how the plan and the quick wins fit together appears below.
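The remediation plan really boils down to a handful of fields per issue, so a spreadsheet is usually enough. As a minimal sketch (hypothetical issues, field names, and quick-win thresholds – not a prescribed format), here is one way to capture the plan and pull the cheap-and-easy fixes to the front of the queue:

```python
from dataclasses import dataclass

@dataclass
class Fix:
    issue: str          # the leaky bucket being addressed
    risk_rank: int      # 1 = highest-priority risk from the prioritize step
    cost: int           # hard cost in dollars
    effort_hours: int   # estimated labor
    weeks: int          # timeframe commitment

plan = [
    Fix("Unpatched browsers on finance laptops", 1, 0, 40, 4),
    Fix("No egress filtering at branch offices", 2, 15000, 120, 12),
    Fix("USB autorun enabled on all desktops", 5, 0, 4, 1),
    Fix("Local admin rights for every user", 3, 0, 80, 8),
]

# Quick wins: cheap and fast, regardless of where they sit on the risk list.
QUICK_COST, QUICK_HOURS = 1000, 16
quick_wins = [f for f in plan if f.cost <= QUICK_COST and f.effort_hours <= QUICK_HOURS]
remainder = sorted((f for f in plan if f not in quick_wins), key=lambda f: f.risk_rank)

for f in quick_wins + remainder:
    print(f"{f.issue}: ~{f.effort_hours}h, ${f.cost}, {f.weeks} week(s)")
```

The point isn’t the tooling – it’s that every line item carries the fix, the cost, the effort, and the timeframe, so the quick wins are obvious and the rest stays ordered by risk.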
Communications

The hallmark of any pragmatic security program (read more about the Pragmatic philosophy here) is frequent communications and senior-level buy-in. So once we have the plan in place, and an idea of resources and timeframes, it’s time to get everyone back in the room to get a thumbs-up for the triage plan. You need to package up the triage plan in a way that makes sense to the business folks. That means thinking about business impact first, reality second, and technology probably not at all. These folks want to know what needs to be done, when it can get done, and what it will cost. We recommend you structure the triage pitch roughly like this:

  • Risk Priorities – Revisit the priorities everyone has presumably already agreed to.
  • Quick Wins – Go through the stuff that’s already done. That will usually put the bigwigs in a good mood, since things are already in motion.
  • Milestones – These folks don’t want to hear the specifics of each project. They want the bottom line: when will each of the risk priorities be remediated?
  • Dependencies – Now that you’ve told them what needs to be done, tell them what constraints you are operating under. Are there budget issues? Are there resource issues? Whatever it is, make sure you are very candid about what can derail efforts and impact milestones.
  • Sign-off – Then you get them to sign in blood as to what will get done and when.

Dealing with Shiny Objects

To be clear, getting to this point tends to be straightforward. Senior management knows stuff needs to get done, and your initial plans should present a good way to get those things done. But the challenge is only beginning, because as you start executing on your triage plan, any number of other priorities will emerge that absolutely, positively need to be dealt with. To have any chance of getting through the triage list, you’ll need to be disciplined about managing expectations relative to the impact of each shiny object on your committed milestones. We also recommend a monthly meeting with the influencers to revisit the timeline and recast the milestones – given the inevitable slippages due to other priorities.

OK, enough of this program management stuff. Next in this series, we’ll tackle some of the technical fundamentals, like software updates, secure configuration, and malware detection.

Other posts in the Endpoint Security Fundamentals series: Introduction; Prioritize: Finding the Leaky Buckets.


ESF: Prioritize: Finding the Leaky Buckets

As we start to dig into the Endpoint Security Fundamentals series, the first step is always to figure out where you are. Since hope is not a strategy, you can’t just make assumptions about what’s installed, what’s configured correctly, and what the end users actually know. So we’ve got to figure that out, which involves using some of the same tactics our adversaries use. The goal here is twofold: first, figure out what presents a clear and present danger to your organization, and put a triage plan in place to remediate those issues; second, manage expectations at all points in this process. That means documenting what you find (no matter how ugly the results) and communicating it to management, so they understand what you are up against. To be clear, although we are talking about endpoint security here, this prioritization (and triage) process should be the first step in any security program.

Assessing the Endpoints

In terms of figuring out your current state, you need to pay attention to a number of different data sources – all of which yield information to help you understand where you stand. Here is a brief description of each, and the techniques to gather the data.

  • Endpoints – Yes, the devices themselves need to be assessed for updated software, current patch levels, unauthorized software, etc. You may have a bunch of this information via a patch/configuration management product or as part of your asset management environment. To confirm that data, we’d also recommend you let a vulnerability scanner loose on at least some of the endpoints, and play around with automated pen testing software to check the exploitability of the devices.
  • Users – If we didn’t have to deal with those pesky users, life would be much easier, eh? Well, regardless of the defenses you have in place, one ill-timed click by a gullible user and you are pwned. You can test users by sending around fake phishing emails and other messages with fake bad links. You can also distribute some USB keys and see how many people actually plug them into machines. These “attacks” will determine pretty quickly whether you have an education problem, and what other defenses you may need to overcome those issues.
  • Data – I know this is about endpoint security, but Rich will be happy to know a discovery process is important here as well. You need to identify devices with sensitive information (since those warrant a higher level of protection), and the only way to do that is to actually figure out where the sensitive data is. Maybe you can leverage other internal efforts to do data discovery, but regardless, you need to know which devices would trigger a disclosure if lost or compromised.
  • Network – Clearly, devices already compromised need to be identified and remediated quickly. The network provides lots of information to indicate compromised devices. Whether it’s looking at network flow data, anomalous destinations, or alerts on egress filtering rules – the network is a pretty reliable indicator of what’s already happened, and where your triage efforts need to start.

Keep in mind that it is what it is. You’ll likely find some pretty idiotic things happening (or confirm the idiotic things you already knew about), but that is all part of the process. The idea isn’t to get overwhelmed – it’s to figure out how much is broken, so you can start putting a plan in place to fix it, and then a process to make sure it doesn’t happen so often. One way to pull these data sources together into a first-cut ranking is sketched below.
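To make that aggregation concrete, here is a minimal sketch (hypothetical device names, finding types, and weights – it assumes you have already exported findings from your scanner, phishing test, data discovery, and network monitoring) of combining the four data sources into a rough per-device score:

```python
from collections import defaultdict

# Hypothetical findings, one tuple per (device, data source, detail).
# In practice these come from your vulnerability scanner, phishing test
# results, data discovery tool, and network flow/egress analysis.
findings = [
    ("laptop-042", "endpoint", "missing critical patches"),
    ("laptop-042", "data",     "holds customer database extract"),
    ("desktop-17", "user",     "clicked test phishing link"),
    ("desktop-17", "network",  "beaconing to unknown external host"),
    ("kiosk-03",   "endpoint", "unauthorized software installed"),
]

# Crude weights: signs of active compromise and sensitive data trump hygiene issues.
WEIGHTS = {"network": 10, "data": 5, "user": 3, "endpoint": 2}

scores = defaultdict(int)
for device, source, detail in findings:
    scores[device] += WEIGHTS[source]

for device, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{device}: {score}")
```

A mechanical score like this is only a starting point for the discussion – which is exactly where the next section picks up.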
Prioritizing the Risks

Prioritization is more art than science. After spending some time gathering data from the endpoints, users, data, and network, how do you know what is most important? Not to be trite, but it’s really a common-sense thing. For example, if your network analysis showed a number of endpoints already compromised, it’s probably a good idea to start by fixing those. Likewise, if your automated pen test showed you could get to a back-end datastore of private information via a bad link in an email (clicked on by an unsuspecting user), then you have a clear and present danger to deal with, no?

After you are done fighting the hottest fires, the prioritization really comes down to who has access to sensitive data, and making sure those devices are protected. This sensitive data could be private data, intellectual property, or anything else you don’t want to see on the full-disclosure mailing list. Hopefully your organization knows what data is sensitive, so you can figure out who has access to that data and build the security program around protecting that access. In the event there is no internal consensus about what data is important, you can’t be bashful about asking questions like, “why does that sales person need the entire customer database?” and “although it’s nice that the assistant to the assistant controller’s assistant wants to work from home, should he have access to the unaudited financials?” Part of prioritizing the risk is identifying idiotic access to sensitive data. And not everything can be a Priority 1.

Jumping on the Moving Train

In the real world, you don’t get to stop everything and start your security program from scratch. You’ve already got all sorts of assessment and protection activities going on – at least we hope you do. That said, we do recommend you take a step back and not be constrained by existing activities. Existing controls are inputs to your data gathering process, but you need to think bigger about the risks to your endpoints and design a program to handle them.

At this point, you should have a pretty good idea of which endpoints are at significant risk and why. In the next post, we’ll discuss how to build the triage plan to address the biggest risks and get past the firefighting stage.

Other posts in the Endpoint Security Fundamentals series: Introduction.


Endpoint Security Fundamentals: Introduction

As we continue building out coverage of more traditional security topics, it’s time to focus some attention on the endpoint. For the most part, folks have just given up on protecting the endpoint. Yes, we all go through the motions of having endpoint agents installed (on Windows anyway), but most of us have pretty low expectations for anti-malware solutions. Justifiably so, but that doesn’t mean it’s game over. There are lots of things we can do to better protect the endpoint, some of which were discussed in Low Hanging Fruit: Endpoint Security. But let’s not get the cart ahead of the horse.

First off, nowadays there are lots of incentives for the bad guys to control endpoint devices. There is usually private data on the device, including nice things like customer databases – and with the strategic use of keyloggers, it’s just a matter of time before bank passwords are discovered. Let’s not forget about intellectual property on the devices, since lots of folks just have to have their deepest, darkest (and most valuable) secrets on their laptops, within easy reach. Best of all, compromising an endpoint device gives the bad guys a foothold in an organization, and enables them to compromise other systems and spread the love.

The endpoint has become the path of least resistance, mostly because of the unsophistication of the folks using those devices to do crazy Web 2.0 stuff. All that information sharing certainly seemed like a good idea at the time, right? Regardless of how wacky the attack, it seems at least one stupid user will fall for it. Between web application attacks like XSS (cross-site scripting), CSRF (cross-site request forgery), social engineering, and all sorts of drive-by attacks, compromising devices is like taking candy from a baby. But not all the blame can be laid at the feet of users, because many attacks are pretty sophisticated, and even hardened security professionals can be duped. Combine that with the explosion of mobile devices, whose owners tend to either lose them or bring back bad stuff from coffee shops and hotels, and you’ve got a wealth of soft targets. And as the folks tasked with protecting corporate data and ensuring compliance, we’ve got to pay more attention to locking down the endpoints – to the degree we can. That’s what the Endpoint Security Fundamentals series is all about.

Philosophy: Real-world Defense in Depth

As with all of Securosis’ research, we focus on tactics that maximize impact for minimal effort. In the real world, we may not have the ability to truly lock down the devices, since those damn users want to do their jobs. The nerve of them! So we’ve focused on layers of defense, not just from the standpoint of technology, but also looking at what we need to do before, during, and after an incident.

  • Prioritize – This will warm the hearts of all the risk management academics out there, but we do need to start the process by understanding which endpoint devices are most at risk because they hold valuable data – for a legitimate business reason, right?
  • Assess the current status – Once we know what’s important, we need to figure out how porous our defenses are, so we’ll be assessing the endpoints.
  • Focus on the fundamentals – Next up, we actually pick that low hanging fruit and do the things we should be doing anyway. Yes, things like keeping software up to date, leveraging what we can from malware defense, and using new technologies like personal firewalls and HIPS. Right, none of this stuff is new, but not enough of us do it. Kind of like… no, I won’t go there.
  • Build a sustainable program – It’s not enough to just implement some technology. We also need to do some of those softer management things we don’t like very much – like managing expectations and defining success. Ultimately we need to make sure the endpoint defenses can (and will) adapt to the changing attack vectors we see.
  • Respond to incidents – Yes, it will happen to you, so it’s important to make sure your incident response plan factors in the reality that an endpoint device may be the primary attack vector. So make sure you’ve got your data gathering and forensics kits at the ready, and also have an established process for when a remote or traveling person is compromised.
  • Document controls – Finally, the auditor will show up and want to know what controls you have in place to protect those endpoints. So you also need to focus on documentation, ensuring you can substantiate all the tactics we’ve discussed thus far.

The ESF Series

To provide a little preview of what’s to come, here is how the series will be structured:

  • Prioritize: Finding the Leaky Buckets
  • Triage: Fixing the Leaky Buckets
  • Fundamentals: Leveraging existing technologies (a few posts covering the major technology areas)
  • The Endpoint Security Program: Systematizing Protection
  • Incident Response: Responding to an endpoint compromise
  • Compliance: Documenting Endpoint Controls

As with all our research initiatives, we count on you to keep us honest. So check out each piece and provide your feedback. Tell me why I’m wrong, how you do things differently, or what we’ve missed.


Incite 3/31/2010: Attitude Is Everything

There are people who suck the air out of the room. You know them – they rarely have anything good to say. They are the ones always pointing out the problems. They are glass-half-empty type folks. No matter what it is, it’s half empty, or even three-quarters empty. The problem is that my tendency is to be one of those people. I like to think it’s a personality thing – that I’m just wired to be cynical, and that it makes me good at my job. I can point out the problems, and be somewhat constructive about how to solve them. But that’s a load of crap. For a long time I was angry, and that made me cynical. But I have nothing to be angry about. Sure, I’ve gotten some bad breaks, but show me a person who hasn’t had things go south at one point or another. I’m a lucky guy. My family loves me. I have a great time at work. I have great friends. One of my crosses to bear is to just remember that – every day.

A good attitude is contagious. And so is a bad attitude. My first step is awareness. I make a conscious effort to be aware of the vibe folks are throwing. When I’m at a coffee shop, I’ll take a break and just try to figure out the tone of the room. I’ll focus on the folks in the room having fun, and try to feed off that. I also need to be aware when I need an attitude adjustment. Another reason I’m really lucky is that I can choose who I’m around most of the time. I don’t have to sit in meetings with Mr. Wet Blanket. And if I’m doing a client engagement with someone with the wrong attitude, I just call them out on it. What do I care? I’m there to do a job, and people with a bad attitude get in my way. Most folks have to be more tactful, but that doesn’t mean you need to just take it. You are in control of your own attitude, which is contagious. Keep your attitude in a good place and those wet blankets have no choice but to dry up a little. And that’s what I’m talking about. – Mike.

Photo credit: “Bad Attitude” originally uploaded by Andy Field

Incite 4 U

What’s that smell? Is it burnout? – Speaking of bad attitudes, one of the major contributors to a crappy outlook is burnout. This post by Dan Lohrmann deals with some of the causes and some tactics to deal with it. For me, the biggest issue is figuring out whether it’s a cyclical low, or it’s not going to get better. If it’s the former, appreciate that some days you feel like crap. Sometimes it’s a week, but it’ll pass. If it’s the latter, start looking for another gig, since burnout can result from not being successful, and not having the opportunity to be successful. That doesn’t usually get better by sticking around. – MR

Screw the customers, save the shareholders – Despite their best attempts to prevent disclosure, it turns out that JC Penney was ‘Company A’ in the indictment against Albert Gonzalez (the one who didn’t work for the Bush administration). Penney fought disclosure of their name tooth and nail, claiming it would cause “confusion and alarm” and “may discourage other victims of cyber-crimes to report the criminal activity or cooperate with enforcement officials for fear of the retribution and reputational damage.” In other words, forget about the customers who might have been harmed – we care about our bottom line. Didn’t they learn anything from TJX? It isn’t like disclosure will actually lose you customers – $202-per-record estimates be damned. – RM

Hard filters, injected – SQL injection remains a problem, as the attacks are difficult to detect and can often be masked, and detection scripts can be fooled by attackers gaming scanning techniques to find stealthy injection patterns. It seems like a fool’s errand, as you foil one attack and attackers just find some other syntax contortion that gets past your filter. Exploiting hard filtered SQL Injections is a great post on the difficulties of scanning SQL statements and how attackers work around defenses. It’s a little more technical, but it walks through various practical attacks, explaining the motivations behind the attacks and plausible defenses. The evolution of this science is very interesting. – AL

The FTC can haz your crap seal – I ranted a few weeks ago about these web security seals, and the fact that some are bad jokes – just as a number of new vendors are rolling out their own shiny seals. Sure, there seems to be a lot of money in it, but promoting a web security seal as a panacea for customer data protection could get you a visit from some nice folks at the Federal Trade Commission. Except they probably aren’t that nice, as they are shutting down those programs. Especially when the vendor didn’t even test the web site – methinks that’s a no-no. Maybe I should ask ControlScan about that – as RSnake points out, they settled with the FTC on deceptive security seals. As Barnum said, there is a sucker born every minute. – MR

The Google smells a bit (skip)fishy – Last week Google launched Skipfish. Even though I was on vacation, I found a few minutes to download and try it out. From the Google documentation: “Skipfish is an active web application security reconnaissance tool. It prepares an interactive sitemap for the targeted site by carrying out a recursive crawl and dictionary-based probes … The final report generated by the tool is meant to serve as a foundation for professional web application security assessments.” The tool is not bad, and it was pretty fast, but I certainly did not stress test it. But the question on my mind is ‘why’? And no, not “why would I use this tool”, but why


FireStarter: Nasty or Not, Jericho Is Irrelevant

It seems the Jericho Forum is at it again. I’m not sure what it is, but they are hitting the PR circuit talking about their latest document, a Self-Assessment Guide. Basically this is a list of “nasty” questions end users should ask vendors to understand whether their products align with the Jericho Commandments.

If you go back and search on my (mostly hate) relationship with Jericho, you’ll see I’m not a fan. I thought the idea of de-perimeterization was silly when they introduced it, and almost everyone agreed with me. Obviously the perimeter was changing, but it clearly was not disappearing. Nor has it. Jericho fell from view for a while and came back in 2006 with their commandments. Most of which are patently obvious. You don’t need Jericho to tell you that the “scope and level of protection should be specific and appropriate to the asset at risk.” Do you? Thankfully Jericho is there to tell us “security mechanisms must be pervasive, simple, scalable and easy to manage.” Calling Captain Obvious.

But back to this nasty questions guide, which is meant to isolate Jericho-friendly vendors. Now I get asking some technical questions of your vendors about trust models, protocol nuances, and interoperability. But shouldn’t you also ask about secure coding practices and application penetration tests? Which is a bigger risk to your environment: the lack of DRM within the system, or an application that provides root to your entire virtualized datacenter?

So I’ve got a couple of questions for the crowd:

  • Do you buy into this de-perimeterization stuff? Have these concepts impacted your security architecture in any way over the past ten years?
  • What about cloud computing? I guess that is the most relevant use case for Jericho’s constructs, but they don’t mention it at all in the self-assessment guide.
  • Would a vendor filling out the Jericho self-assessment guide sway your technology buying decision in any way? Do you even ask these kinds of questions during procurement?

I guess it would be great to hear if I’m just shoveling dirt on something that is already pretty much dead. Not that I’m above that, but it’s also possible that I’m missing something.


Security Innovation Redux: Missing the Forest for the Trees

There was a great level of discourse around Rich’s FireStarter on Monday: There is No Market for Security Innovation. Check out the comments to get a good feel for the polarization of folks on both sides of the discussion. A number of folks also posted their own perspectives, ranging from Will Gragido at Cassandra Security and Adam Shostack on the New School blog, to the hardest working man in showbiz, Alex Hutton at Verizon Business. All these folks made a number of great points. But part of me thinks we are missing the forest for the trees here.

The FireStarter was really about new markets, and the fact that it’s very, very hard for innovative technology to cross the chasm unless it’s explicitly mandated by a compliance regulation. I strongly believe that, and we’ve seen numerous examples over the past few years. But part of Alex’s post dragged me back to my Pragmatic philosophy, when he started talking about how “innovation” isn’t really just constrained to a new shiny widget that goes into a 19” rack (or a hypervisor). It can be new uses for stuff you already have. Or working the politics of the system a bit better internally by getting face time with business leaders. I don’t really call these tactics innovation, but I’m splitting hairs here.

My point, which I tweeted, is “Regardless of innovation in security, most of the world doesn’t use the stuff they already have. IMO that is the real problem.” Again, within this echo chamber most of us have our act together, certainly relative to the rest of the world. And we are passionate about this stuff, like Charlie Miller fuzzing all sorts of things to find 0-day attacks while his kids are surfing on the Macs. So we get all excited about Pwn2Own and other very advanced stuff, which may or may not ever become weaponized. We forget the rest of the world is security Neanderthal man. So part of this entire discussion about innovation seems kind of silly to me, since most of the world can’t use the tools they already have.


Announcing NetSec Ops Quant: Network Security Metrics Suck. Let’s Fix Them.

The lack of credible and relevant network security metrics has been a thorn in my side for years. We don’t know how to define success. We don’t know how to communicate value. And ultimately, we don’t even know what we should be tracking operationally to show improvement (or failure) in our network security activities. But we in the echo chamber seem to be happier bitching about this, or flaming each other on mailing lists, than focusing on finding a solution. Some folks have tried to drive towards a set of metrics that make sense, but most of the attempts are way too academic, and cost too much to collect to be usable in everyday practice. Not to mention that most of our daily activities aren’t even included in the models.

Not to pick on them too much, but I think these issues are highlighted in the way the Center for Internet Security has scoped out network security metrics. Basically, they didn’t. They have metrics on Incident Management, Vulnerability Management, Patch Management, Configuration Change Management, Application Security, and Financial Metrics. So the guy managing the network security devices doesn’t count? Again, I know CIS is working towards a lot of other stuff, but the reality is that the majority of security spending is targeted at the network and endpoint domains, and there are no good metrics for those.

So let’s fix it. Today we are kicking off the next in our series of Quant projects. This one is called Network Security Operations Quant, and we aim to build a process map and underlying cost model for how organizations manage their network security devices. The project’s formal objective is:

The objective of Network Security Operations Quant is to develop a cost model for monitoring and managing network security devices that accurately reflects the associated financial and resource costs.

Secondarily, we also want to:

  • Build the model in a manner that supports use as an operational efficiency model, to help organizations optimize their network security monitoring and management processes and compare costs of different options.
  • Heavily engage the community and produce an open model with wide support and credibility, using the Totally Transparent Research process.
  • Advance the state of IT metrics, particularly operational security metrics.

We are grateful to our friends at SecureWorks, who are funding this primary research effort. As with all our Quant projects, our methodology is:

  • Establish the high-level process map via our own research.
  • Use a broad survey to validate and identify gaps in the process map.
  • Define a set of subprocesses for each high-level process.
  • Build metrics for each subprocess.
  • Assemble the metrics into a model which can be used to track operational improvement.

From a scoping standpoint, we are going to deal with 5 different network security processes:

  • Monitoring firewalls
  • Monitoring IDS/IPS
  • Monitoring server devices
  • Managing firewalls
  • Managing IDS/IPS

Yes, we know network security is bigger than just these 5 functions, but we can’t boil the ocean. There is a lot of other stuff we’ll model out using the Quant process over the next year, but this should be a good start.
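To give a feel for what the end product looks like, here is a minimal sketch of a time-and-rate cost roll-up for a single process. Every subprocess name, hour estimate, frequency, and rate below is a made-up placeholder – the real numbers are exactly what this research is meant to produce:

```python
# Hypothetical cost model for one process (say, managing firewalls).
# Each entry: (subprocess, hours per occurrence, occurrences per month).
SUBPROCESSES = [
    ("Review change request", 0.5, 40),
    ("Test rule change",      1.0, 40),
    ("Deploy and document",   0.5, 40),
    ("Audit rule base",       8.0, 1),
]

FULLY_LOADED_RATE = 85.0  # dollars per analyst-hour (assumption)

monthly_cost = sum(hours * per_month * FULLY_LOADED_RATE
                   for _, hours, per_month in SUBPROCESSES)
print(f"Estimated monthly cost to manage firewalls: ${monthly_cost:,.2f}")
```

Multiply something like this across the five processes above and you have a first cut at the operational cost picture – and a baseline against which to measure improvement.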
Put up or shut up

We can’t do this alone, so we are asking for your help. First off, we are going to put together a “panel” of organizations to serve as the basis for our initial primary research. That means we’ll be doing either site visits or detailed phone interviews to understand how you undertake network security processes. We’ll also need the folks on the panel to shoot holes in our process maps before they are posted for public feedback. We are looking for about a dozen organizations across a number of different verticals and company sizes (large enterprise to mid-market). As with all our research, there will be no direct attribution to your organization. We are happy to sign NDAs and the like. If you are interested in participating, please send me an email directly at mrothman (at) securosis . com.

Once the initial process maps are posted, we will post a survey to find out whether you actually do the steps we identify. We’ll also want your feedback on the process via the posts that describe each step. Everyone has an opportunity to participate, and we hope you will take us up on it.

This is possibly the coolest research project I’ve personally been involved with, and I’m really excited to get moving on it. We look forward to your participation, so we can finally get on the same page and figure out how to measure how we “network security plumbers” do our business.


Bonus Incite 3/19/2010: Don’t be LHF

I got a little motivated this AM (it might have something to do with blowing off this afternoon to watch NCAA tourney games) and decided to double up on the Incite this week. I read Adrian’s Friday Summary intro and it kind of bothered me. Mostly because I don’t know the answers either, and I find that questions I can’t answer cause me stress and angst. Maybe it’s because I like to be a know-it-all, and it sucks when your own limitations smack you upside the head. Anyhow, what do we do about this whole information sharing culture we’ve created – and more importantly, how do we make sure the next generation is protected from the new-age scam artists who prey on over-sharers?

I came across this coverage from RSA of Hugh Thompson’s interviews with Craigslist’s Craig Newmark and the Woz. Both Newmark and Wozniak believe education is the answer. Truth be told, I have mixed feelings. I know the futility of widespread education, because you can’t possibly keep up with the attackers – not within a mass market context. Yet my plan is still to use education as one of a few tactics to keep my kids (and the Boss) safe online. The reality is that because my kids will be trained on how to recognize fraud and what not to do online, they will be ahead of 95% of the other folks out there. And remember, most attackers prey on the lowest hanging fruit. As long as my kids aren’t that, I think things will work out OK.

But I also maintain pretty tight controls on the machines they use and the network they connect to. As they get more sophisticated, so will the defenses. I’ll implement a kids’ browsing network (and segment out my business machines and sensitive data). I already lock down their devices so they can’t install software (unless I know about it). At some point, they’ll get their own machines and I’ll centralize the file storage (both for backup and oversight), so I can easily rebuild their machines every couple months. And we’ve got a lot of controls to protect our finances as well. We check the credit cards frequently (to ensure unauthorized transactions get caught quickly) and have a home incident response plan in the event one of my devices does get pwned.

Of course, that doesn’t answer the question of how to solve the macro problem, but honestly I’m not sure we can. Fraud has been happening since the beginning of time, and it’s a bit crazy to think we could stop it entirely. But I can work my ass off to minimize the impact of the bad guys on my own situation, which is a pretty good objective – both at home and at work. Have a great weekend. – Mike.

Photo credit: “that low-hanging fruit they keep talking about in meetings” originally uploaded by travelskerricks

Bonus Incite 4 U

Getting screwed by the back channel – I read a recent post from the security career counselors (Mike Murray and Lee Kushner) and it got my goat a bit. The post was about how to deal with negative references, and I’m sensitive to this. I’ve been in a situation where a former boss sent a torpedo through my engine room as I had a new job lined up and closed. It was during a back channel conversation, so I had no recourse (even though there was a non-disparagement clause in my exit agreement). Mike and Lee suggest first assembling a list of positive references that can offset a negative reference, as well as being candid with your prospective employer about the issues. This is great advice, since that’s exactly how I dealt with the situation. I did my own backchannel work and got folks inside the company to talk about me (on deep background), as well as confronting the situation head on. It worked out for me, but everyone needs contingency plans for everything, and a negative reference is certainly one of them. – MR

Isn’t UTM a hopping market? – From all the market share projections and growth numbers, the UTM (unified threat management) market is growing like gangbusters. Yet you see companies like Symantec (a few years ago) and McAfee (who recently shut down their SnapGear offering) getting out of the business. The reality is that there are multiple market segments in network security, and they require different solutions. UTM can be applicable to large enterprises, but they don’t buy combined solutions. They evaluate the products on a function-by-function basis. So they will compare the UTM-based IPS to the stand-alone IPS, and so on, before they decide whether to embrace an integrated solution. Whereas the mid-market wants a toaster to make their problems go away. So hats off to McAfee for deciding they didn’t have a competitive offering or leveraged path to market, and getting out of the business. One of the hardest things to do is kill a product, no matter how uncompetitive it is. Strong companies need to kill things, or they become overpopulated and operate sub-optimally. – MR

Stupid is as stupid does – I recently watched Forrest Gump again, and it’s a treasure trove of little sayings that really apply to our daily existence. We are security professionals, which means we should understand risks and act accordingly. How can you tell your internal users to do something if you don’t do it yourself? I guess you can, but come back into the shop after having your own machine pwned and see how much credibility you have left. So when I see the inevitable reports from security conferences about how stupid our own professionals are, it makes me nuts. At the RSA show, Motorola AirDefense found all sorts of wireless stupidity among the attendees, and it’s really nutty. If you don’t have a 3G card, then just make do without connecting for a few hours while you are at the show. You have a mobile device, and if it’s that important, go back to your hotel. At a security show they


Network Security Fundamentals: Egress Filtering

As we wrap up our initial wave of Network Security Fundamentals, we’ve already discussed Default Deny, Monitoring Everything, Correlation, and Looking for Not Normal. Now it’s time to see if we can actually get in the way of some of these nasty attacks. So what are we trying to block? Basically, a lot of the issues we find through looking for not normal. The general idea involves applying a positive security model not just to inbound traffic (default deny), but to outbound traffic as well. This is called egress filtering, and in practice it’s basically turning your perimeter device inside out and applying policies to outbound traffic. This defensive tactic ensures that non-standard ports and protocols don’t make their way out of your network. Filtering can also block reconnaissance tactics, network enumeration techniques, outbound spam bots, and those pesky employees running Internet businesses from within your corporate network. Amazingly enough this still happens, and too many organizations are none the wiser.

Defining Egress Filtering Policies

Your best bet is to start with recent incidents and their root causes. Define the outbound ports and protocols which allowed the data to be exfiltrated from your network. Yes, this is obvious, but it’s a start, and you don’t want to block everything. Not unless you enjoy being ritually flayed by your users. Next, leverage the initial steps in the Fundamentals series and analyze correlated data to determine what is normal. Armed with this information, turn to the recent high-profile attacks getting a lot of airtime. Think Aurora, and learn how that attack exfiltrates data (a custom encrypted protocol on port 443). For such higher-probability attacks, define another set of egress filtering rules to make sure you block (or at least are notified) when you have outbound traffic on the ports used during the attacks. You can also use tighter location-based filtering policies, like not allowing traffic to countries where you don’t do business. This won’t work for mega-corporations doing business in every country in the world, but for the other 99.99% of you, it’s an option. Or you could enforce RFC standards on ports 80 and 443 to make sure no custom protocol is hiding anything in a standard HTTP stream. Again, there are lots of different ways to set up your egress filtering rules. Most can help, depending on the nature of your network traffic, but none is a panacea. Whichever you decide to implement, make sure you test the rules in non-blocking mode first, to make sure nothing breaks.
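As a minimal sketch of what an alert-only egress policy check can look like (the allowed ports, allowed countries, and connection records below are hypothetical – substitute whatever your flow or log export actually provides), the idea is simply to flag outbound connections that fall outside what you have defined as normal:

```python
# Alert-only egress check: flag outbound connections that violate the policy
# before you ever turn on blocking. The allowlists here are assumptions.
ALLOWED_PORTS = {25, 53, 80, 443}
ALLOWED_COUNTRIES = {"US", "CA", "GB"}

outbound = [
    # (source host, destination IP, destination port, destination country)
    ("web-01",     "203.0.113.10", 443,  "US"),
    ("desktop-17", "198.51.100.7", 6667, "US"),   # non-standard port
    ("laptop-042", "192.0.2.55",   443,  "RU"),   # allowed port, disallowed country
]

for host, dst, port, country in outbound:
    reasons = []
    if port not in ALLOWED_PORTS:
        reasons.append(f"non-standard port {port}")
    if country not in ALLOWED_COUNTRIES:
        reasons.append(f"destination country {country}")
    if reasons:
        print(f"ALERT {host} -> {dst}: " + ", ".join(reasons))
```

Once the alerts stop surprising you (and stop breaking things), the same policy can be pushed down to the perimeter device as blocking rules.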
Blocking or Alerting

As you can imagine, it’s a dicey proposition to start blocking traffic that may break legitimate applications. So take care when defining these rules, or take the easy way out and just send alerts when one of your egress policies is violated. Of course, the alerting approach can (and probably will) result in plenty of false positives, but as you tune the policies you’ll be able to minimize that. Which brings up the hard truth of playing around with these policies: there are no shortcuts. Vendors who talk about self-defending anything, or learning systems, or anything else that doesn’t involve the brutal work of defining policies and tuning them over time until they work in your environment, basically don’t spend enough time in the real world. ’nuff said. To finish our discussion of blocking, again think about these rules in terms of your IPS. You block the stuff you know is bad, and you alert on the stuff you aren’t sure about. Let’s hope you aren’t so buried under alerts that something important gets by, but that’s life in the big city.

No Magic Bullets

Yes, we believe egress filtering is a key control in your security arsenal, but as with everything else, it’s not a panacea. There are lots of attacks which will skate by undetected, including those that send traffic over standard ports. So once again, it’s important to look at other controls to provide additional layers of defense. These may include outbound content filtering, application-aware perimeter devices, deep packet inspection, and others.

More Network Security Fundamentals

I’m going to switch gears a bit and start documenting Endpoint Security Fundamentals next week, but we’ll be back to networks soon enough, getting into wireless security, network pen testing, perimeter change control, and outsourced perimeter monitoring. Stay tuned.


Incite 3/17/2010: Seeing the Enemy

“WE HAVE MET THE ENEMY AND HE IS US.” – POGO (1970)

I’ve worked for companies where we had to spend so much time fighting each other that the market got away. I’ve also worked at companies where internal debate and strife made the organization stronger and the product better. But there are no pure absolutes – as much as I try to be binary, most companies include both sides of the coin. But when I read of the termination of Pennsylvania’s CISO because he dared to actually talk about a breach, it made me wonder – about everything. Dennis hit the nail on the head: this is bad for all of us. Can we be successful? We all suffer from a vacuum of information. That was the premise of Adam Shostack and Andrew Stewart’s book The New School of Information Security: that we need to share information – good and bad, flattering and unflattering – to make us better at protecting stuff. Data can help.

Unfortunately, most of the world thinks that security through obscurity is the way to go. As Adrian pointed out in Monday’s FireStarter, there isn’t much incentive to disclose anything unless an organization must – by law. The power of negative PR grossly outweighs the security benefit of information sharing. Which is a shame. So what do you do? Give up?

Well, actually, maybe you do give up. Not on security in general, but on your organization. Every day you need to figure out if you can overcome the enemy within your four walls. If you can’t, then move on. I know, now is the wrong time to leave a job. I get that. But how long can you go in every day and get kicked in the teeth? Only you can decide that. But if your organization is a mess, don’t wait for it to get better.

If you do decide to stay, you need to discover the power of the peer group. Your organization will not sanction it, and don’t blame me, but find a local or industry group of peeps where you can share your dirt. You take a blood oath (just like in grade school) that what is spoken about in the group stays within the group, and you spill the beans. You learn from what your peers have done, and they learn from you. At this point we must acknowledge that widespread information sharing is not going to happen. Which sucks, but it is what it is. So we need to get creative and figure out an alternative means to get the job done. Find your peeps and learn from them. – Mike.

Photo credit: “Pogo – Walt Kelly (1951) – front cover” originally uploaded by apophysis_rocks

Incite 4 U

Time to study marketing too… – RSnake is starting to mingle with some shady characters. Well, maybe not shady, but certainly on the wrong side of the rule of law. One of his conclusions is that it’s getting harder for the bad guys to do their work, at least the work of compromising meaty, valuable targets. That’s a good thing. But the black hats are innovative and playing for real money, so they will figure something out, and their models will evolve to continue generating profits. It’s the way of the capitalist. This idea of assigning a much higher value to a zombie within the network of a target makes perfect sense. It’s no different than how marketing firms charge a lot more for leads directly within the target market. So it’s probably not a bad idea for us security folks to study a bit of marketing, which will tell us how the bad guys will evolve their tactics. – MR

Lies, Damn Lies, and Exploits – We’ve all been hearing a ton about that new “Aurora” exploit (mostly because of all the idiots who think it’s the same thing as APT), but NSS Labs took a pretty darn interesting approach to all the hype. Assuming that every anti-malware vendor on the market would block the known Aurora exploit, they went ahead and tested the major consumer AV products against fully functional variants. NSS varied both the exploit and the payload to see which tools would still block the attack. The results are uglier than a hairless cat with a furball problem. Only one vendor (McAfee) protected against all the variants, and some (read the report yourself) couldn’t handle even the most minor changes. NSS is working on a test of the enterprise versions, but I love when someone ignites the snake oil. – RM

I hate C-I-A – Confidentiality, Integrity, and Availability is what it stands for. I was reminded of this reading this CIA Triad post earlier today. Every person studying for their CISSP is taught that this is how they need to think about security. I always felt this was BS, along with a lot of other stuff they teach in CISSP classes, but that’s another topic. CIA just fails to capture the essence of security. Yeah, I have to admit that CIA represents three handy buckets that can compartmentalize security events, but they so missed the point about how one should approach security that I have become repulsed by the concept. Seriously, we need something better. Something like MSB: Misuse-Spoof-Break. Do something totally unintended, do something normal pretending to be someone else, or change something. Isn’t that a better way to think about security threats? It’s the “What can we screw with next?” triad. And push “denial of service” to the back of your mind. Script kiddies used to think it was fun, and some governments still do, but when it comes to hacking, it’s nothing more than a socially awkward cousin of the other three. – AL

Signatures in burglar alarm clothing – Pauldotcom, writing with his Tenable hat on, explains a method he calls “burglar alarms” as a way to deflate some APT hype. This method ostensibly provides a heads-up on attacks we haven’t seen before. He uses this as yet another example of how to detect an APT. I know I’m not the sharpest tool in the shed, but I don’t


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.