Incite 9/29/2010: Reading Is Fundamental

For those of you with young kids, the best practice is to spend some time every day reading to them, so they learn to love books. When our kids were little, we dutifully did that, but once XX1 got proficient, she would just read by herself. What did she need us for? She has inhaled hundreds of books, but none resonate like Harry Potter. She mowed through each Potter book in a matter of days, even the hefty ones at the end of the series. And she’s read each one multiple times. In fact, we had to remove the books from her room because she wasn’t reading anything else.

The Boss went over to the bookstore a while back and tried to find a bunch of other books to pique XX1’s interest. She ended up getting the Percy Jackson series, but XX1 wasn’t interested. It wasn’t Harry Potter or even Captain Underpants, so no sale. Not wanting to see a book go unread, I proceeded to mow through it and really liked it. And I knew XX1 would like it too, if she only gave it a chance. So the Boss and I got a bit more aggressive. She was going to read Percy Jackson, even if we had to bribe her. So we did, and she still didn’t.

It was time for drastic measures. I decided that we’d read the book together. The plan was that every night (that I was in town, anyway), we would read a chapter of The Lightning Thief. That lasted for about three days. Not because I got sick of it, and not because she didn’t want to spend time with me. She’d just gotten into the book and then proceeded to inhale it. Which was fine by me, because I’d already read it.

We decided to tackle Book 2 in the series, The Sea of Monsters, together. We made it through three chapters, and then much to my chagrin she took the book to school and mowed through three more chapters. That was a problem, because at this point I was into the book as well. And I couldn’t have her way ahead of me – that wouldn’t work. So I mandated that she could only read Percy Jackson with me. Yes, I’m a mean Dad.

For the past few weeks, every night we would mow through a chapter or two. We finished the second book last night. I do the reading, she asks some questions, and then at the end of the chapter we chat a bit. About my day, about her day, about whatever’s on her mind. Sitting with her is a bit like a KGB interview, without the spotlight in my face. She’s got a million questions. Like what classes I took in college, and why I lived in the fraternity house. There’s a reason XX1 was named “most inquisitive” in kindergarten.

I really treasure my reading time with her. It’s great to be able to stop and just read. We focus on the adventures of Percy, not on all the crap I didn’t get done that day or how she dealt with the mean girl on the playground. Until we started actually talking, I didn’t realize how much I was missing by just swooping in right before bedtime, doing our prayer, and then moving on to the next thing on my list.

I’m excited to start reading the next book in the series, and then something after that. At some point, I’m sure she’ll want to be IM’ing with her friends or catching up on homework as opposed to reading with me. But until then, I’ll take it. It’s become one of the best half hours of my day. Reading is clearly fundamental for kids, but there’s something to be said for its impact on parents too.
– Mike

Photo credits: “Parenting: Ready, Set, Go!” originally uploaded by Micah Taylor

Recent Securosis Posts

  • The Securosis 2010 Data Security Survey Report Rates the Top 5 Data Security Controls
  • Attend the Securosis/SearchSecurity Data Security Event on October 26
  • Proposed Internet Wiretapping Law Fundamentally Incompatible with Security
  • Government Pipe Dreams
  • Friday Summary: September 24, 2010
  • Monitoring up the Stack: File Integrity Monitoring; DAM, Part 1

NSO Quant Posts

  • NSO Quant: Clarifying Metrics
  • NSO Quant: Manage Metrics – Signature Management
  • NSO Quant: Manage Metrics – Process Change Request and Test/Approve
  • NSO Quant: Manage Metrics – Deploy and Audit/Validate
  • NSO Quant: Manage Metrics – Monitor Issues/Tune IDS/IPS
  • NSO Quant: Health Metrics – Device Health

LiquidMatrix Security Briefing: September 24

Incite 4 U

  • Stuxnet comes from deep pockets – I know it’s shocking, but we are getting more information about Stuxnet. Not just on the technical side, like this post by Gary McGraw on how it actually works. Clearly it’s targeting control systems and uses some pretty innovative tactics. So the conclusion emerging is that some kind of well-funded entity must be behind it. Let me hand out the “Inspector Clouseau” award for obvious conclusions. But I’m not sure it really matters who is behind the attack. We may as well blame the Chinese, since we blame them for everything. It really could have been anyone, though it’s hard for me to see the benefit to a private enterprise or rich mogul of funding an effort like that. Of course we all have our speculations, but in the end let’s just accept that when there is a will, there is a way for attackers to break your stuff. And they will. – MR
  • Are breaches declining? – One of the most surprising results in our big data security survey is that more people report breaches declining than increasing. 46% of you told us your breaches are about the same this year over last, with 12% reporting a few more or many more, and 27% reporting a few less or many less. Rsnake noticed the same trend in the DataLossDB, and is a bit skeptical. While I know not all breaches are reported (in violation of various regulations), I think a few factors are at play. I do think


NSO Quant: The End is Near!

As mentioned last week, we’ve pulled the NSO Quant posts out of the main feed because the volume was too heavy. So I have been doing some cross-linking to let those of you who don’t follow that feed know when new stuff appears over there. Well, at long last, I have finished all the metrics posts. The final post is … (drum roll, please): NSO Quant: Health Metrics – Device Health

I’ve also put together a comprehensive index post, basically because I needed a single location to find all the work that went into the NSO Quant process. Check it out – it’s actually kind of scary to see how much work went into this series. 47 posts. Oy!

Finally, I’m in the process of assembling the final NSO Quant report, which means I’m analyzing the survey data right now. If you want a chance at the iPad, you’ll need to fill out the survey (you must complete the entire survey to be eligible) by tomorrow at 5pm ET. We’ll keep the survey open beyond that, but the iPad will be gone. Given the size of the main document – 60+ pages – I will likely split out the actual metrics model into a stand-alone spreadsheet, so that and the final report should be posted within two weeks.


NSO Quant: Clarifying Metrics (and some more links)

We had a great comment from Dan on one of the metrics posts, and it merits an answer with some explanation, because in the barrage of posts the intended audience can certainly get lost. Here is Dan’s comment:

Who is the intended audience for these metrics? Kind of see this as part of the job, and not sure what the value is. To me the metrics that are critical around process are do the amount of changes align with the number of authorized requests. Do the configurations adhere to current policy requirements, etc… Just thinking about presenting to the CIO that I spent 3 hours getting consensus, 2 hours on prioritizing and not seeing how that gets much traction.

One of the pillars of my philosophy on metrics is that there are really three sets of metrics network security teams need to worry about. The first is what Dan is talking about: the numbers you need to substantiate what you are doing for audit purposes. Those are key issues, and things you have to be able to prove. The second bucket is numbers that are important to senior management. That tends to focus on incidents and spending: how many incidents happened, how that is trending, and how long it takes to deal with each situation. On the spending side, senior folks want to know security spend as a percentage of IT spend and of total revenue, and how that compares to peers. Then there is the third bucket: the operational metrics we use to improve and streamline our processes. It’s the old saw about how you can’t manage what you don’t measure – the metrics defined within NSO Quant represent pretty much everything we can measure. That doesn’t mean you should measure everything, but the idea of this project is to decompose the processes as much as possible to provide a basis for measurement. (There’s a small sketch at the end of this post that makes these buckets concrete.)

Again, not all companies do all the process steps. Actually, most companies don’t do much from a process standpoint – besides fight fires all day. Gathering this kind of data requires a significant amount of effort and will not be for everyone. But if you are trying to understand operationally how much time you spend on things, and then use that data to trend and improve your operations, you can get payback. Or if you want to use the metrics to determine whether it even makes sense for you to be performing these functions (as opposed to outsourcing), then you need to gather the data. But clearly the CIO and other C-level folks aren’t going to be overly interested in the amount of time it takes you to monitor sources for IDS/IPS signature updates. They care about outcomes, and most of the time you spend with them needs to be focused on getting buy-in and updating status on commitments you’ve already made.

Hopefully that clarifies things a bit. Now that I’m off the soapbox, let me point to a few more NSO Quant metrics posts that went up over the past few days. We’re at the end of the process, so there are two more posts I’ll link to Monday, and then we’ll be packaging up the research into a pretty and comprehensive document.

  • NSO Quant: Manage Metrics – Signature Management
  • NSO Quant: Manage Metrics – Process Change Request and Test/Approve
  • NSO Quant: Manage Metrics – Deploy and Audit/Validate
  • NSO Quant: Manage Metrics – Monitor Issues/Tune IDS/IPS
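As promised, here’s a minimal sketch of the three buckets. Everything in it is illustrative – the metric names, values, and categories are hypothetical, not pulled from the NSO Quant model – but it shows how you might tag the numbers you collect so each audience sees only its own bucket.

```python
# Minimal sketch: tag metrics by audience bucket. All names and
# values are hypothetical, not part of the NSO Quant model itself.
from dataclasses import dataclass
from enum import Enum


class Bucket(Enum):
    AUDIT = "substantiate activity for auditors"
    EXECUTIVE = "incidents and spend for senior management"
    OPERATIONAL = "process time/cost for tuning operations"


@dataclass
class Metric:
    name: str
    bucket: Bucket
    value: float
    unit: str


metrics = [
    Metric("changes matching authorized requests", Bucket.AUDIT, 0.97, "ratio"),
    Metric("security spend vs. IT spend", Bucket.EXECUTIVE, 0.06, "ratio"),
    Metric("monitoring IDS/IPS signature sources", Bucket.OPERATIONAL, 3.0, "hours/week"),
]

# Only the operational bucket feeds process-improvement trending;
# the other two buckets go to different audiences entirely.
for m in metrics:
    if m.bucket is Bucket.OPERATIONAL:
        print(f"{m.name}: {m.value} {m.unit}")
```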


Incite 9/22/2010: The Place That Time Forgot

I don’t give a crap about my hair. Yeah, it’s gray. But I have it, so I guess that’s something. It grows fast and looks the same, no matter what I do to it. I went through a period maybe 10 years ago where I got my hair styled, but besides ending up a bit lighter in the wallet (both from a $45 cut and all the product they pushed on me), there wasn’t much impact. I did get to listen to some cool music and see good-looking stylists wearing skimpy outfits with lots of tattoos and piercings. But at the end of the day, my hair looked the same. And the Boss seems to still like me regardless of what my hair looks like, though I found cutting it too short doesn’t go over very well.

So when I moved down to the ATL, a friend recommended I check out an old-time barber shop in downtown Alpharetta. I went in and thought I had stepped into a time machine. Seems the only change to the place over the past 30 years was a new boom box to blast country music. They probably got it 15 years ago. Aside from that, it’s like time forgot this place. They give Double Bubble to the kids. The chairs are probably as old as I am. And the two barbers, Richard and Sonny, come in every day and do their job. It’s actually cool to see. The shop is open 6am-6pm Monday thru Friday and 6am-2pm on Saturday. Each of them travels at least 30 minutes a day to get to the shop. They both have farms out in the country.

So that’s what these guys do. They cut hair, for the young and for the old. For the infirm, and it seems, for everyone else. They greet you with a nice hello, and remind you to “Come back soon” when you leave. Sometimes we talk about the weather. Sometimes we talk about what projects they have going on at the farm. Sometimes we don’t talk at all. Which is fine by me, since it’s hard to hear with a clipper buzzing in my ear. When they are done trimming my mane to 3/4” on top and 1/2” on the sides, they bust out the hot shaving cream and straight razor to shave my neck. It’s a great experience.

And these guys seem happy. They aren’t striving for more. They aren’t multi-tasking. They don’t write a blog or constantly check their Twitter feed. They don’t even have a mailing list. They cut hair. If you come back, that’s great. If not, oh well. I’d love to take my boy there, but it wouldn’t go over too well. The shop we take him to has video games and movies to occupy the ADD kids for the 10 minutes it takes to get their haircuts. No video games, no haircut. Such is my reality.

Sure, the economy goes up and then it goes down. But everyone needs a haircut every couple weeks. Anyhow, I figure these guys will end up OK. I think Richard owns the building and the land where the shop is. It’s in the middle of old town Alpharetta, and I’m sure the developers have been chasing him for years to sell out so they can build another strip mall. So at some point, when they decide they are done cutting hair, he’ll be able to buy a new tractor (actually, probably a hundred of them) and spend all day at the farm. I hope that isn’t anytime soon. I enjoy my visits to the place that time forgot. Even the country music blaring from the old boom box…

– Mike.

Photo credits: “Rand Barber Shop II” originally uploaded by sandman

Recent Securosis Posts

Yeah, we are back to full productivity and then some. Over the next few weeks, we’ll be separating the posts relating to our research projects from the main feed. We’ll do a lot of cross-linking, so you’ll know what we are working on and be able to follow the projects interesting to you, but we think over 20 technically deep posts is probably a bit much for a week. It’s a lot for me, and following all this stuff is my job. We also want to thank IT Knowledge Exchange, which listed our little blog as one of their 10 Favorite Information Security Blogs. We’re in some pretty good company, except that Amrit guy. Does he even still have a blog?

  • The Securosis 2010 Data Security Survey Report Rates the Top 5 Data Security Controls
  • New Paper (+ Webcast): Understanding and Selecting a Tokenization Solution
  • FireStarter: It’s Time to Talk about APT
  • Friday Summary: September 17, 2010
  • White Paper Released: Data Encryption 101 for PCI
  • DLP Selection Process: Infrastructure Integration Requirements; Protection Requirements; Defining the Content
  • Monitoring up the Stack: Threats; Introduction
  • Understanding and Selecting an Enterprise Firewall: Advanced Features, Part 1; Advanced Features, Part 2; To UTM or Not to UTM?; Selection Process

NSO Quant Posts

  • Manage Metrics – Signature Management
  • Manage Metrics – Document Policies & Rules
  • Manage Metrics – Define/Update Policies & Rules
  • Manage Metrics – Policy Review
  • Monitor Metrics – Validate and Escalate
  • Monitor Metrics – Analyze
  • Monitor Metrics – Collect and Store

LiquidMatrix Security Briefing: September 20; September 21

Incite 4 U

  • What’s my risk again? – Interesting comments from Intel’s CISO at the recent Forrester security conference regarding risk. Or more to the point, the misrepresentation of risk, either toward the positive or the negative. I figured he’d be pushing some ePO-based risk dashboard or something, but it wasn’t that at all. He talked about psychology and economics, and it sure sounded like he was channeling Rich, at least from the coverage. Our pal Alex Hutton loves to pontificate about the need to objectively quantify risk, and we’ve certainly had our discussions (yes, I’m being kind) about how effectively you can model risk. But the point is not necessarily to get a number, but


NSO Quant: Manage Process Metrics, Part 1

We realized last week that we may have hit the saturation point for activity on the blog. Right now we have three ongoing blog series plus NSO Quant. All our series post a few times a week, and Quant can run up to 10 posts. It’s too much for us to keep up with, so I can’t even imagine someone who actually has to do something with their days. So we have moved the Quant posts out of the main blog feed. Every other day, I’ll do a quick post linking to any activity in the project, which is rapidly coming to a close.

On Monday we posted the first 3 metrics posts for the Manage process. This is the part where we define the policies and rules to run our firewalls and IDS/IPS devices. Again, this project is driven by feedback from the community. We appreciate your participation and hope you’ll check out the metrics posts and tell us whether we are on target. So here are the first three posts:

  • NSO Quant: Manage Metrics – Policy Review
  • NSO Quant: Manage Metrics – Define/Update Policies and Rules
  • NSO Quant: Manage Metrics – Document Policies and Rules

Over the rest of the day, we’ll hit metrics for the signature management processes (for IDS/IPS), and then move into the operational phases of managing network security devices.


Understanding and Selecting an Enterprise Firewall: Selection Process

Now that we’ve been through the drivers for evolved, application-aware firewalls, and a lot of the technology enabling them, how does the selection process need to evolve to keep pace? As with most of our research at Securosis, we favor mapping out a very detailed process, and leaving you to decide which steps make sense in your situation. So we don’t expect every organization to go through every step in this process. Figure out which are appropriate for your organization and use those. To be clear, buying an enterprise firewall usually involves calling up your reseller and getting the paperwork for the renewal. But given that these firewalls imply new application policies and perhaps a different deployment architecture, some work must be done during the selection process to get things right.

Define Needs

The key here is to understand which applications you want to control, and how much functionality (IDS/IPS, web filtering, UTM) you want to consider collapsing into the enterprise firewall. A few steps to consider:

  • Create an oversight committee: We hate the term ‘committee’ too, but the reality is that an application aware firewall will impact activities across several groups. Clearly this is not just about the security team, but the network team and the application teams as well – at minimum, you will need to profile their applications. So it’s best to get someone from each of these teams (to whatever degree they exist in your organization) on the committee. Ensure they understand your objectives for the new enterprise firewall, and make sure it’s clear how their operations will change.
  • Define the applications to control: Which applications do you need to control? You may not actually know this until you install one of these devices and see what visibility they provide into applications traversing the firewall. We’ll discuss phasing in your deployment, but you need to understand what degree of granularity you need from a blocking standpoint, as that will drive some aspects of selection.
  • Determine management requirements: The deployment scenario will drive these. Do you need the console to manage the policies? To generate reports? For dashboards? The degree to which you need management help (if you have a third party tool, the answer should be: not much) will define a set of management requirements.
  • Product versus managed service: Do you plan to use a managed service for either managing or monitoring the enterprise firewall? Have you selected a provider? The provider might define your short list before you even start.

By the end of this phase you should have identified key stakeholders, convened a selection team, prioritized the applications to control, and determined management requirements.

Formalize Requirements

This phase can be performed by a smaller team working under the mandate of the selection committee. Here the generic needs determined in phase 1 are translated into specific technical features, and any additional requirements are considered. You can always refine these requirements as you proceed through the selection process and get a better feel for how the products work (and how effective and flexible they are at blocking applications). At the conclusion of this stage you will develop a formal RFI (Request For Information) to release to vendors, and a rough RFP (Request For Proposals) that you’ll clean up and formally issue in the evaluation phase.
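One way to keep the translation from needs to requirements honest is a weighted scorecard for the eventual RFP responses. Here’s a minimal sketch – the requirement names, weights, and scores are all hypothetical, and this isn’t a prescribed method, just a way to keep the evaluation anchored to the priorities the committee agreed on.

```python
# Minimal sketch: weighted scorecard for RFP responses.
# Requirement names, weights, and scores are hypothetical.

# Weight hard requirements heavily; a vendor scoring 0 on a hard
# requirement should be dropped before scoring even starts.
requirements = {
    "blocks priority applications": 0.35,
    "management console meets needs": 0.25,
    "performance at rated throughput": 0.25,
    "managed service availability": 0.15,
}

# Scores from the RFP review: 0-5 per requirement, per vendor.
responses = {
    "vendor_a": {"blocks priority applications": 4, "management console meets needs": 3,
                 "performance at rated throughput": 5, "managed service availability": 2},
    "vendor_b": {"blocks priority applications": 5, "management console meets needs": 4,
                 "performance at rated throughput": 3, "managed service availability": 4},
}


def weighted_score(scores: dict) -> float:
    """Sum each requirement's score times its agreed weight."""
    return sum(requirements[req] * score for req, score in scores.items())


# Rank vendors by weighted score, best first.
for vendor, scores in sorted(responses.items(), key=lambda kv: -weighted_score(kv[1])):
    print(f"{vendor}: {weighted_score(scores):.2f}")
```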
Evaluate Products

Increasingly we see firewall vendors talking about application awareness, new architectures, and very similar feature sets. The following steps should minimize your risk and help you feel confident in your final decision:

  • Issue the RFI: Larger organizations should issue an RFI through established channels and contact a few leading enterprise firewall vendors directly. In reality virtually all the firewall players sell through the security channel, so it’s likely you will end up going through a VAR.
  • Define the short list: Before bringing anyone in, match any materials from the vendor or other sources to your RFI and draft RFP. Your goal is to build a short list of 3 products which can satisfy most of your needs. You should also use outside research sources and product comparisons. Understand that you’ll likely need to compromise at some point in the process, as it’s unlikely any vendor can meet every requirement.
  • Dog and Pony Show: Instead of generic presentations and demonstrations, ask the vendors to walk you through how they protect the specific applications you are worried about. This is critical, because the vendors are very good at showing cool eye candy and presenting a long list of generic supported applications. Don’t expect a full response to your draft RFP – these meetings are to help you better understand how each vendor can solve your specific use cases and to finalize your requirements.
  • Finalize and issue your RFP: At this point you should completely understand your specific requirements, and can issue a final formal RFP.
  • Assess RFP responses and start proof of concept (PoC): Review the RFP results and drop anyone who doesn’t meet your hard requirements. Then bring in any remaining products for in-house testing. Given that it’s not advisable to pop holes in your perimeter while learning how to manage these devices, we suggest a layered approach:
      • Test Ingress: First test your ingress connection by installing the new firewall in front of the existing perimeter gateway. Migrate your policies over, let the box run for a little while, and see what it’s blocking and what it’s not.
      • Test Egress: Then move the firewall to the other side of the perimeter gateway, so it’s in position to do egress filtering on all your traffic. We suggest you monitor the traffic for a while to understand what is happening, and then define egress filtering policies.

Understand that you need to devote resources to each PoC, and testing ingress separately from egress adds time to the process. But it’s not feasible to leave the perimeter unprotected while you figure out what works, so this approach gives you that protection along with the ability to run the devices in pseudo-production mode.

Selection and Deployment

  • Select, negotiate, and buy: Finish testing, take the results to the full selection committee, and


Understanding and Selecting an Enterprise Firewall: To UTM or Not to UTM?

Given how much time we’ve spent discussing application awareness, and how these new capabilities pretty much stomp all over existing security products like IDS/IPS and web filters, does that mean standalone network security devices go away? Should you just quietly accept that unified threat management (UTM) is the way to go because the enterprise firewall provides multiple functions? Not exactly.

First let’s talk about the rise of UTM, even in the enterprise. The drive towards UTM started with smaller businesses, where using a single device for firewall, IDS/IPS, anti-spam, web filtering, gateway AV, and other functions reduced complexity and cost – and thus made a lot of sense. Over time, as device performance increased, it became feasible even for enterprises to consolidate functions into a single device. This doesn’t mean many enterprises tried it, but they had the option. So why hasn’t the large enterprise embraced UTM? It comes down to predictable factors we see impacting enterprise technology adoption in general:

  • Branding: UTM was perceived as an SMB technology, so many enterprise snobs didn’t want anything to do with it. Why pay $2,500 for a box when you can pay $50,000 to make a statement about being one of the big boys? Of course, notwithstanding the category name, every vendor brought a multi-function security gateway to market. They realized ‘UTM’ could be a liability, so they use different names for people who don’t want to use the same gear as the great unwashed.
  • Performance Perception: Again, given the SMB heritage of UTM, enterprise network security players could easily paint UTM as low-performance, and customers believed them. To be clear, the UTM-centric vendors didn’t help here, pushing their boxes into use cases where they couldn’t be successful, demonstrating they weren’t always suitable. If you try to do high-speed firewall, IDS/IPS, and anti-spam with thousands of rules, all in the same box, it’s not going to work well. Hell, even standalone devices use load balancing techniques to manage high volumes, but the perception among enterprise customers was that UTM couldn’t scale. And we all know that perception is reality.
  • Single Point of Failure: If the box goes down you are owned, right? Well, yes – or completely dead in the water – you might get to choose which. Many enterprises remain unwilling to put all their eggs in one basket, even with high availability configurations and the like. As fans of layered security we don’t blame folks for thinking this way, but understand that you can deploy a set of multi-function gateways to address the issue. But when you are looking for excuses not to do something, you can always find at least one.
  • Specialization: The complexity of large enterprise environments demands lots of resources, and these resources tend to specialize in the operation of one specific device. So you’ll have a firewall jockey, an IDS/IPS guru, and an anti-spam queen. If you have all those capabilities in a single box, what does that do for the job security of all three? To be clear, every UTM device supports role-based management so administrators can control only the functions in their area, but it’s easier for security folks to justify their existence if they have a dedicated box/function to manage. Yes, this boils down to politics, but we all know political machinations have killed more than a handful of emerging technologies.
  • Pricing: There is no reason you can’t get a multi-function security device and use it as a standalone device. You can get a UTM and run it like a firewall. Really. But to date, the enterprise pricing of these UTM devices has made that unattractive for most organizations. Again, a clear case of vendors not helping themselves. So we’d like to see more of a smorgasbord pricing model, where you buy only the modules you need. Yes, some of the vendors (especially ones selling software on commodity hardware) are there. But their inclination is to nickel and dime the customer, charging too much for each module, so enterprises lose the idea that multi-function devices will actually save money.

Ultimately these factors will not stop the multi-function security device juggernaut from continuing to collapse more functions into the perimeter gateway. Vendors changed the branding to avoid calling it UTM – even though it is. The devices have increased performance with new chips and updated architectures. And even the political stuff works out over time, due to economic pressure to increase operational efficiency. So the conclusion we draw is that consolidation of network security functions is inevitable, even in the large enterprise. But we aren’t religious about UTM vs. standalone devices. All we care about is seeing the right set of security controls implemented in the most effective way to protect critical information.

We don’t expect standalone IDS/IPS devices to go away any time soon. And much of the content filtering (email and web) is moving to cloud-based services – we believe this is a very positive trend. These new capabilities of the enterprise firewall give us more flexibility. That’s right, we still believe (strongly) in defense in depth. So having an IDS/IPS sitting behind an application aware firewall isn’t a bad thing. Attacks change every day, and sometimes it’s best to look for a specific issue. Let’s use a battle analogy: if we have a sniper (the IDS/IPS) sitting behind the moat (the firewall) looking for a certain individual (the new attack), there is nothing wrong with that. If you want to provision some perimeter security in the cloud and have a cleaner stream of traffic hitting your network, that’s all good. If you want to maintain separate devices at HQ and larger regional locations, while integrating functions in small offices and branches, or maybe even running network security in a virtual machine, you can.

And that’s really the point. For a long time, we security folks have been building security architectures based on what the devices could do, not what’s appropriate (or necessary) to protect information assets. Having the ability to provision the security you need where you need


Understanding and Selecting an Enterprise Firewall: Advanced Features, Part 2

After digging into application awareness features in Part 1, let’s talk about non-application capabilities. These new functions are really about dealing with today’s attacks. Historically, managing ports and protocols sufficed to keep the bad guys outside the perimeter; but with today’s bumper crop of zombies and bots, the old ways don’t cut it any more.

Bot Detection

As law enforcement got much better at tracking attackers, the bad guys adapted by hiding behind armies of compromised machines. Better known as zombies or bots, these devices (nominally controlled by consumers) send spam, do reconnaissance, and launch other attacks. Due to their sophisticated command and control structures, it’s very difficult to map out these bot networks, and attacks can be launched from anywhere at any time. So how do we deal with this new kind of attacker on the enterprise firewall?

  • Reputation: Reputation analysis was originally created to help fight spam, and is rapidly being adopted in the broader network security context. We know some proportion of the devices out there are doing bad things, and we know many of those IP addresses. Yes, they are likely compromised devices (as opposed to machines owned by bad folks specifically for nefarious purposes), but regardless, they are doing bad things. You can check a reputation service in real time and block, or take other action on, traffic originating from recognized bad actors. This is primarily a black list, though some companies track ‘good’ IPs as well, which allows them to take a cautious stance on devices not known to be either good or bad.
  • Traffic Analysis: Another technique we are seeing on firewalls is the addition of traffic analysis. Network behavioral analysis didn’t really make it as a standalone capability, but tracking network flows across the firewall (with origin, destination, and protocol information) allows you to build a baseline of acceptable traffic patterns and highlight abnormal activity. You can also set alerts on specific traffic patterns associated with command and control (bot) communications, and so use such a firewall as an IDS/IPS.

Are these two capabilities critical right now? Given the prevalence of other mechanisms to detect these attacks – such as flow analysis through SIEM and pattern matching via network IDS – this is a nice-to-have capability. But we expect a lot of these capabilities to centralize on application aware firewalls, positioning these devices as the perimeter security gateway. As such, we expect these capabilities to become more prevalent over the next 2 years, and in the process make the bot detection specialists acquisition targets.

Content Inspection

It’s funny, but lots of vendors are using the term ‘DLP’ to describe how they analyze content within the firewall. I know Rich loves that, and to be clear, firewall vendors are not performing Data Leak Prevention. Not the way we define it, anyway. At best, it’s content analysis a bit more sophisticated than regular expression scanning. There are no capabilities to protect data at rest or in use, and their algorithms for deep content analysis are immature, when they exist at all. So we are pretty skeptical about the level of real content inspection you can get from a firewall. If you are just looking to make sure social security numbers or account IDs don’t leave the perimeter through email or web traffic, a sophisticated firewall can do that. But don’t expect to protect your intellectual property with sophisticated analysis algorithms.
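To make that scope limitation concrete, here’s a minimal sketch of the kind of pattern matching this implies: a couple of regular expressions run over an outbound payload. The account-ID format is hypothetical, and nothing here reflects any vendor’s actual engine – it just shows why regex-level inspection catches structured identifiers but nothing subtler.

```python
# Minimal sketch of regex-level egress content inspection.
# Patterns are illustrative; the account-ID format is hypothetical.
import re

PATTERNS = {
    # SSN-shaped strings: 3 digits, 2 digits, 4 digits.
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    # Hypothetical internal account-ID format: ACCT- plus 8 digits.
    "account_id": re.compile(r"\bACCT-\d{8}\b"),
}


def scan_payload(payload: str) -> list:
    """Return (pattern_name, matched_text) pairs found in a payload."""
    hits = []
    for name, pattern in PATTERNS.items():
        hits.extend((name, m) for m in pattern.findall(payload))
    return hits


# This gets caught; paraphrased or encoded intellectual property would not.
print(scan_payload("per our call, the SSN is 078-05-1120 and the account is ACCT-00429317"))
```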
When firewall vendors start bragging about ‘DLP’, you have our permission to beat them like dogs. That said, there are clearly opportunities for better integration between real DLP solutions and the enterprise firewall, which can provide an additional layer of enforcement. We also expect the inspection algorithms available on firewalls to mature, which could supplant the current DLP network gateways – particularly in smaller locations where multiple devices can be problematic.

Vulnerability Integration

One of the more interesting integrations we see is the ability for a web application scanner or service to find an issue and set a blocking rule directly on the web application firewall. This is not a long-term fix, but it does buy time to investigate a potential application flaw, and provides breathing room to choose the most appropriate remediation approach. Some vendors refer to this as virtual patching. Whatever it’s called, we think it’s interesting (there’s a small sketch of the idea at the end of this post). So we expect the same kind of capability to show up on general purpose enterprise firewalls. You’d expect the vulnerability scanning vendors to lead the way on this integration, but regardless, it will make for an interesting capability of the application aware firewall. Especially if you broaden your thinking beyond general network/system scanners. A database scan would likely yield some interesting holes which could be addressed with an application blocking rule at the firewall, no? There are numerous intriguing possibilities, and of course there is always a risk of over-automating (SkyNet, anyone?), but the additional capabilities are likely worth the complexity risk.

Next we’ll address the question we’ve been dancing around throughout the series: is there a difference between an application aware firewall and a UTM (unified threat management) device? Stay tuned…
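As promised, here is a minimal sketch of the virtual patching flow: take a scanner finding and emit a temporary blocking rule. The finding format and rule fields are hypothetical – every scanner and firewall pair does this with its own formats and APIs – but the shape of the integration is the point.

```python
# Minimal sketch of "virtual patching": scanner finding -> temporary
# blocking rule. Finding format and rule fields are hypothetical.
from dataclasses import dataclass


@dataclass
class Finding:
    app: str    # application the scanner profiled
    path: str   # vulnerable URL path or parameter
    issue: str  # e.g., "SQL injection in 'id' parameter"


def to_blocking_rule(finding: Finding) -> dict:
    """Build a temporary block rule; remove it once the app is fixed."""
    return {
        "action": "block",
        "app": finding.app,
        "match_path": finding.path,
        "comment": f"virtual patch: {finding.issue}",
        "expires_days": 30,  # buy time -- don't let it become permanent
    }


rule = to_blocking_rule(Finding("store-front", "/checkout", "SQL injection in 'id' parameter"))
print(rule)
```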


Incite 9/15/2010: Up, down, up, down, Repeat

It was an eventful weekend at chez Rothman. The twins (XX2 and XY) had a birthday, which meant the in-laws were in town, and for the first time we had separate parties for the kids. That meant one party on Saturday night and another Sunday afternoon. We had a ton of work to do to get the house ready to entertain a bunch of rambunctious 7-year-olds. But that’s not all – we also had a soccer game and tryouts for the holiday dance performance on Saturday. And that wasn’t it. It was the first weekend of the NFL season. I’ve been waiting intently since February for football to start again, and I had to balance all this activity with my strong desire to sit on my ass and watch football. As I mentioned last week, I’m trying to be present and enjoy what I’m doing now – so this weekend was a good challenge.

I’m happy to say the weekend was great. Friday and Saturday were intense. Lots of running around and the associated stress, but it all went without a hitch. Well, almost. Any time you get a bunch of girls together (regardless of how old they are), drama cannot be far off. So we had a bit, but nothing unmanageable. The girls had a great time and that’s what’s important. We are gluttons for punishment, so we had 4 girls sleep over. So I had to get donuts in the AM and then deliver the kids to Sunday school. Then I could take a breath, grab a workout, and finally sit on my ass and watch the first half of the early NFL games. When it was time for the party to start, I set the DVR to record the rest of the game, resisted the temptation to check the scores, and had a good time with the boys.

When everyone left, I kicked back and settled in to watch the games. I was flying high. Then the Falcons lost in OT. Crash. Huge bummer. Kind of balanced out by the Giants winning. So I had a win and a loss. I could deal. Then the late games started. I picked San Francisco in my knock-out pool, which means if I get a game wrong, I’m out. Of course, Seattle kicked the crap out of SFO and I’m out in week 1. Kind of like being the first one voted off the island in Survivor. Why bother? I should have just set the Jackson on fire, which would have been more satisfying.

I didn’t have time to sulk because we went out to dinner with the entire family. I got past the losses and was able to enjoy dinner. Then we got back and watched the 8pm game with my in-laws, who are big Redskins fans. Dallas ended up losing, so that was a little cherry on top. As I look back on the day, I realize it’s really a microcosm of life. You are up. You are down. You are up again and then you are down again. Whatever you feel, it will soon pass. As long as I’m not down for too long, it’s all good. It helps me appreciate when things are good. And I’ll keep riding the waves of life and trying my damnedest to enjoy the ups. And the downs.

– Mike.

Photo credits: “Up is more dirty than down” originally uploaded by James Cridland

Recent Securosis Posts

As you can tell, we’ve been pretty busy over the past week, and Rich is just getting ramped back up. Yes, we have a number of ongoing research projects and another starting later this week. We know keeping up with everything is like drinking from a fire hose, and we always appreciate the feedback and comments on our research.

  • HP Sets Its ArcSights on Security
  • FireStarter: Automating Secure Software Development
  • Friday Summary: September 10, 2010
  • White Paper Released: Data Encryption 101 for PCI
  • DLP Selection Process, Step 1
  • Understanding and Selecting an Enterprise Firewall: Management; Deployment Considerations; Technical Architecture, Part 2; Technical Architecture, Part 1

NSO Quant Posts

  • Monitor Metrics – Collect and Store
  • Monitor Metrics – Define Policies
  • Monitor Metrics – Enumerate and Scope

LiquidMatrix Security Briefing: September 13; September 9; September 8

Incite 4 U

  • Here you have… a time machine – The big news last week was the Here You Have worm, which compromised large organizations such as NASA, Comcast, and Disney. It was a good old-fashioned mass-mailing virus. Wow! Haven’t seen one of those in many years. Hopefully your company didn’t get hammered, but it does remind us that what’s old inevitably comes back again. It also goes to show that users will shoot themselves in the foot, every time. So what do we do? Get back to basics, folks. Endpoint security, check. Security awareness training, check. Maybe it’s time to think about more draconian lockdown of PCs (with something like application white listing). If you didn’t get nailed, consider yourself lucky, but don’t get complacent. Given the success of Here You Have, it’s just a matter of time before we get back to the future with more old school attacks. – MR
  • Cyber-Something – A couple of the CISOs at the OWASP conference ducked out because their networks had been compromised by a worm. The “Here You Have” worm was being reported, and it infected more than half the desktops at one firm; in another case it just crashed the mail server. But this whole situation ticks me off. Besides wanting to smack the person who came up with the term “Cyber-Jihad” – as I suspect this is nothing more than an international script-kiddie – I don’t like that we have moved focus off the important issue. After reviewing the McAfee blog, it seems that propagation is purely due to people clicking on email links that download malware. So WTF? Why is the focus on ‘Cyber-Jihad’? Rather than “Ooh, look at the Cyber-monkey!” how about “How the heck did the email scanner not catch this?” Why wasn’t the reputation of


Understanding and Selecting an Enterprise Firewall: Advanced Features, Part 1

Since our main contention in the Understanding and Selecting an Enterprise Firewall series is the movement toward application aware firewalls, it makes sense to dig a bit deeper into the technology that will make this happen, and the major uses for these capabilities. With an understanding of what to look for, you should be in a better position to judge whether a vendor’s application awareness capabilities will match your requirements.

Application Visibility

In the first of our application awareness posts, we talked about visibility as one of the key use cases for application aware firewalls. What exactly does that mean? We’ll break it up into the following buckets:

  • Eye Candy: Most security folks don’t care about fancy charts and graphs, but senior management loves them. What CFO doesn’t turn to jello at the first sign of a colorful pie chart? The ability to see application usage and traffic, and who is consuming bandwidth, over a long period of time provides huge value in understanding normal behavior on your network. Look for granularity and flexibility in these application-oriented visuals. Top 10 lists are a given, but be sure you can slice the data the way you need – or at least export to a tool that can. Having the data is nice; being able to use it is better.
  • Alerting: The trending capabilities of application traffic analysis allow you to set alerts to fire when abnormal behavior appears. Given the infinite attack surface we must protect, any help pinpointing and prioritizing investigative resources increases efficiency. Be sure to have sufficient knobs and dials to set appropriate alerts. You’d like to be able to alert on applications, on user/group behavior within specific applications, possibly on payload in the packets (through regular expression type analysis), and on any combination thereof. Obviously the more flexibility you have in setting application alerts and tightening thresholds, the better you’ll be able to cut the noise. This sounds very similar to managing an IDS, but we’ll get to that later. Also make sure setting lots of application rules won’t kill performance. Dropped packets are a lousy trade-off for application alerts.

One challenge of using a traditional firewall is the interface. Unless the user experience has been rebuilt around an application context (what folks are doing), it still feels like everything is ports and protocols (how they are doing it). Clearly the further you can abstract network behavior to application behavior, the more applicable (and understandable) your rules will be.

Application Blocking

Visibility is the first step, but you also want to be able to block certain applications, users, and content activities. We told you this is very similar to the IPS concept – the difference is in how detection works. The IDS/IPS uses a negative security model (matching patterns to identify bad stuff) to fire rules, while application aware firewalls use a positive security model: they determine what application traffic is authorized, and block everything else. Extending the IPS comparison a bit, we see most organizations using blocking on only a small minority of the rules/signatures on the box, usually less than 10%. This is for obvious reasons (primarily because blocking legitimate traffic is frowned upon), and gets back to a fundamental tenet of IPS which also applies to application aware firewalls: just because you can block, doesn’t mean you should.
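To pin down what the positive security model means in practice, here’s a minimal sketch: an allow list of authorized application traffic per group, with everything else denied by default. The application and group names are made up, and a real device classifies flows into applications for you – this just shows the default-deny logic.

```python
# Minimal sketch of a positive security model: the allow list IS the
# policy, and anything absent is denied by default. App/group names
# are illustrative; a real firewall classifies flows for you.
ALLOWED = {
    "webmail": {"all"},
    "crm": {"sales", "support"},
    "file-sharing": {"engineering"},
}


def decide(application: str, group: str) -> str:
    """Allow only authorized (application, group) combinations."""
    groups = ALLOWED.get(application, set())
    if "all" in groups or group in groups:
        return "allow"
    return "deny"  # default deny: unclassified/unauthorized traffic drops


print(decide("crm", "sales"))        # allow
print(decide("crm", "marketing"))    # deny -- group not authorized
print(decide("p2p", "engineering"))  # deny -- app not on the list at all
```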
Of course, a positive security model means you are defining what is acceptable and blocking everything else – but be careful here. Most security organizations aren’t in the loop on everything that is happening (we know – quite a shocker), so you may inadvertently stymie a new or updated application because the firewall doesn’t allow it. From a security standpoint that’s a great thing: you want to be able to vet each application before it goes live. Politically, it might not work out so well. You’ll need to gauge your own ability to get away with this.

Aside from the IPS analogy, there is also a very clear white-listing analogy to blocking application traffic. One of the issues with application white-listing on endpoints is the challenge of getting applications classified correctly and providing a clear workflow mechanism to deal with exceptions. The same issues apply to application blocking. First you need to ensure the application profiles are accurate and up to date. Second, you need a process for allowing traffic through, balancing the need to protect infrastructure and information against responsiveness to business needs. Yeah, this is non-trivial, which is why blocking is done on only a fraction of application traffic.

Overlap with Existing Web Security

Think about the increasing functionality of your operating system or office suite. Basically, the big behemoths squashed a whole bunch of third party utilities by bundling their capabilities into each new release. The same thing is happening here. If you look at the typical capabilities of your web filter, there isn’t a lot that can’t be done by an application aware firewall. Visibility? Check. Employee control/management? Check. URL blocking, heuristics, script analysis, AV? Check, check, check, check. The standalone web filter is an endangered species – which, given the complexity of the perimeter, isn’t a bad thing. Simplifying is good. Moreover, a lot of folks are doing web filtering in the cloud now, so the movement away from on-premises web filters was under way anyway. Of course, no entrenched device gets replaced overnight, but the long slide towards standalone web filter oblivion has begun. As you look at application aware firewalls, you may be able to displace an existing device (or eliminate a maintenance renewal) to justify the cost of the new gear. Clearly going after the web filtering budget makes sense, and the more expense-neutral you can make the purchase, the better.

What about web application firewalls? To date, these categories have been separate, with less clear overlap. The WAF’s abilities to profile and learn about application behavior – in terms of parameter validation, session management, flow analysis, etc. – aren’t available on application aware firewalls. For now. But let’s be clear, it’s not a


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments – just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.