NSO Quant: The Report and Metrics Model

It has been a long slog, but the final report on the Network Security Operations (NSO) Quant research project has been published. We are also releasing the raw data we collected in the survey. The main report includes:

  • Background material, assumptions, and research process overview
  • Complete process framework for Monitoring (firewalls, IDS/IPS, & servers)
  • Complete process framework for Managing (firewalls & IDS/IPS)
  • Complete process framework for maintaining Device Health
  • The detailed metrics which correlate with each process framework
  • Identification of key metrics
  • How to use the model

Additionally, you can download and play around with the spreadsheet version of the metrics model. In the spreadsheet you can enter your specific roles and headcount costs, and estimate the time required for each task, to figure out your own costs.

In terms of the survey, as of October 22, 2010 we had 80 responses. The demographics were pretty broad (from under 5 employees to over 400,000), but we believe the data validates some of the conclusions we reached through our primary research. Click here for the full, raw survey results. The file includes a summary report and the full raw survey data (anonymized where needed) in .xls format.

With the exception of the raw survey results, we have linked to the landing pages for all the documents, because that’s where we will be putting updates and supplemental material (hopefully you aren’t annoyed by having to click an extra time to see the report). The material is being released under a Creative Commons license. Thanks again to SecureWorks for sponsoring this research.
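To make the arithmetic the spreadsheet automates concrete, here is a minimal sketch: each task gets a time estimate and a role, which is multiplied by that role’s loaded hourly cost and rolled up per process. The task names, hours, and rates below are hypothetical placeholders for illustration, not figures from the actual model.

```python
# Hypothetical sketch of the cost arithmetic behind a metrics model like
# NSO Quant: per-task time estimates times loaded hourly rates, summed
# per process. All names and numbers are illustrative placeholders.

hourly_rates = {            # loaded cost per role, dollars/hour (invented)
    "security_analyst": 85.0,
    "network_engineer": 95.0,
}

# (process, task, role, estimated hours per month) -- invented examples
tasks = [
    ("Monitor", "Collect and store events", "security_analyst", 20),
    ("Monitor", "Analyze alerts",           "security_analyst", 35),
    ("Manage",  "Process change requests",  "network_engineer", 10),
    ("Manage",  "Test and approve changes", "network_engineer",  8),
]

costs = {}
for process, task, role, hours in tasks:
    costs[process] = costs.get(process, 0.0) + hours * hourly_rates[role]

for process, monthly_cost in costs.items():
    print(f"{process}: ${monthly_cost:,.2f}/month")
```

Swap in your own roles, rates, and task times and the rollup gives a rough per-process operational cost, which is essentially what the spreadsheet model does with far more task granularity.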


Friday Summary: October 22, 2010

Facebook is for old people. Facebook will ultimately make us more secure. I have learned these two important lessons over the last few weeks. Saying Facebook is for old people is not like saying it’s dead – far from it. But every time I talk computers with people 10-15 years older than me, all they do is talk about Facebook. They love it! They can’t believe they found high school acquaintances they have not seen for 30+ years. They love the convenience of keeping tabs on family and friends from their Facebook page. They are amazed to find relatives who have been out of touch for decades. It’s their favorite web site by far. And they are shocked that I don’t use it. Obviously I will want to once I understand it, so they all insist on telling me about all the great things I could do with Facebook and the wonderful things I am missing. They even give me that look, like I am a complete computer neophyte. One said, “I thought you were into computers?” Any conversation about security and privacy went in one ear and out the other because, as I have been told, Facebook is awesome.

As it always does, this thread eventually leads to the “My computer is really slow!” and “I think I have a virus, what should I do?” conversations. Back when I had the patience to help people out, a quick check of the machine would not uncover a virus. I never got past the dozen quasi-malicious browser plug-ins, PR-ware tracking scripts sucking up 40% of system resources, or nasty pieces of malware that refused to be uninstalled. Nowadays I tell them to stop visiting every risky site, stop installing all this “free” crap, and for effing sake, stop clicking on email links that supposedly come from your bank or Facebook friends! I think I got some of them to stop clicking email links from their banks. They are, after all, concerned about security. Facebook is a different story – they would rather throw the machine out than change their Facebook habits because, sheesh, why else use the computer?

I am starting to notice an increase in computer security awareness from the general public. Actually, the extent of their awareness is that a lot of them have been hacked. The local people I talk to on a regular basis tell me they, and all their children, have had Facebook and Twitter accounts hacked. It slowed them down for a bit, but they were thankful to get their accounts back. And being newly interested in security, they changed their passwords to ‘12345’ to ensure they will be safe in the future. Listening to the radio last week, two of the DJs had their Twitter accounts stolen. One DJ had a password that was his favorite team name concatenated with the number of his favorite player. He was begging over the air for the ‘hacker’ to return his access so he could tweet about the ongoing National League series. Social media are a big part of their personal and professional lives and, dammit, someone was messing with them!

One of my biggest surprises in Average Joe computer security was seeing Hammacher Schlemmer offer an “online purchase security system”. Yep, it’s a little credit card mag stripe reader with a USB cable. Supposedly it encrypts data before it reaches your computer. I certainly wonder exactly whose public key it might be encrypting with! Actually, I wonder if the device does what it says it does – or anything at all! I am certain Hammacher Schlemmer sells more Harry Potter wands, knock-off Faberge eggs, and doggie step-up ladders than they do credit card security systems, but clearly they believe there is a market for this type of device. I wonder how many people will see these in their in-flight Sky Mall magazines over the holidays and order a couple for the family. Even for Aunt Margie in Minnesota, so she can safely send electronic gift cards to all the relatives she found on Facebook. Now that she has regained access to her account and set a new password. And that’s how Facebook will improve security for everyone.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences
  • Adrian’s Tech Target article on Database Auditing.
  • Adrian’s technical tips on setting up database auditing.
  • Rich at RSA 2010 China.

Favorite Securosis Posts
  • Mike Rothman: Monitoring up the Stack: Climbing the Stack. The end of the MUTS series provides actionable information on where to start extending your monitoring environment.
  • Adrian Lane: Vaults within Vaults.

Other Securosis Posts
  • React Faster and Better: Data Collection/Monitoring Infrastructure.
  • White Paper Goodness: Understanding and Selecting an Enterprise Firewall.
  • Incite 10/20/2010: The Wrongness of Being Right.
  • React Faster and Better: Introduction.
  • New Blog Series: React Faster and Better.
  • Monitoring up the Stack: Platform Considerations.

Favorite Outside Posts
  • Mike Rothman: Reconcile This. Gunnar calls out the hypocrisy of what security folks focus on – it’s great. The bad guys are one thing, but our greatest adversary is probably inertia.
  • Gunnar Peterson: Tidal Wave of Java Exploitation.
  • Adrian Lane: Geek Day at the White House.
  • Chris Pepper: WTF? Apple deprecates Java. Actually they’re dropping the Apple JVM as of 10.7, but do you expect Oracle to build and maintain a high-quality JVM for Mac OS X? A lot of Mac-toting Java developers are looking at each other quizzically today.

Project Quant Posts
  • NSO Quant: Index of Posts.
  • NSO Quant: Health Metrics – Device Health.
  • NSO Quant: Manage Metrics – Monitor Issues/Tune IDS/IPS.
  • NSO Quant: Manage Metrics – Deploy and Audit/Validate.
  • NSO Quant: Manage Metrics – Process Change Request and Test/Approve.

Research Reports and Presentations
  • Understanding and Selecting a DLP Solution.
  • White Paper: Understanding and Selecting an Enterprise Firewall.
  • Understanding and Selecting a Tokenization Solution.
  • Security + Agile = FAIL Presentation.
  • Data Encryption 101: A Pragmatic Approach to PCI.
  • White Paper: Understanding and Selecting SIEM/Log Management.
  • White Paper: Endpoint Security Fundamentals.

Top News and Posts
  • A boatload of Oracle fixes.
  • Judge Clears CAPTCHA-Breaking Case for Criminal Trial.
  • Data theft overtakes physical loss.
  • Malware pushers abuse Firefox warning page.
  • Predator


Can we ever break IT?

I was reading one of RSnake’s posts on how our security devolves to the lowest common denominator because we can’t break IT – which means we can’t make changes to systems, applications, and endpoints in order to protect them. He was talking specifically about the browser, but it got me thinking a bit bigger: when, and if, it’s OK to break IT. To clarify, by breaking IT I mean changing the user experience adversely in some way to more effectively protect critical data/information. I’ll get back to a concept I’ve been harping on for the last few weeks: the need to understand which applications and data are most important to your organization. If the data is that important to your business, then you need to be able to break IT in order to protect it. Right?

Take the next step: this means there probably should be a class of users whose devices need to be locked down. Those users have sensitive information on those devices, and if they want to have that data, they need to understand they won’t be able to do whatever they want on their devices. They can always choose not to have that data (so they can visit pr0n sites and all), but is it unreasonable to want to lock down those devices? And actually be able to do it? There are other users who don’t have access to much, so locking down their devices wouldn’t yield much value. Sure, those devices could be compromised and turned into bots, but you have other defenses to address that, right?

But back to RSnake’s point: we have always been forced to accept the lowest common denominator from a security standpoint. That’s mostly because security is not perceived as adding value to the business, and so gets done as quickly and cheaply as possible. Your organization has very little incentive to be more secure, so it isn’t. Your compliance mandate du jour also forces us into the lowest common denominator box. Love it or hate it, PCI represents that low bar now. Actually, if you ask most folks who don’t do security for a living (and probably a shocking number who do), they’ll tell you that being PCI compliant represents a good level of security. Of course we know better, but they don’t. So we are forced to make a serious case to go beyond what is perceived to be adequate security. Most won’t and don’t, and there it ends.

So RSnake and the rest of us can gripe about the fact that we aren’t allowed to break much of anything to protect it, but that’s as much our problem as anything else. We don’t make the case effectively enough that the added protection we’ll get from breaking the user experience is worth it. Until we can substantiate that, we’ll remain in the same boat. Leaky as it may be.


Everything You Ever Wanted to Know about DLP

Way back when I converted Securosis from a blog into a company, my very first paper was (no surprise) Understanding and Selecting a DLP Solution. Three or so years later I worried it was getting a little long in the tooth, even though the content was still pretty accurate. So, as you may have noticed from recent posts, I decided to update and expand the content for a new version of the paper. Version 1.0 is still downloaded on pretty much a daily basis (actually, sometimes a few hundred times a month).

The biggest areas of expansion were a revamped selection process (with workflow, criteria, and a selection worksheet) and more detail on “DLP features” and “DLP Light” tools that don’t fit the full-solution description. This encapsulates everything you should need to know up through acquiring a DLP solution, but since it’s already 50+ pages I decided to hold off on implementation until the next paper (besides, that gives me a chance to drum up some extra cash to feed the new kid). I did, however, also break out just the selection worksheet for those of you who don’t need the entire paper. Not that it will make any sense without the paper.

The landing page is here: Understanding and Selecting a DLP Solution. Direct download is at: Whitepaper (PDF)

Very special thanks to Websense for licensing the paper and worksheet. They were the very first sponsor of my first paper, which helped me show my wife we wouldn’t lose the house because I quit my job to blog.


Incident Response Fundamentals: Data Collection/Monitoring Infrastructure

In Incident Response Fundamentals: Introduction we talked about the philosophical underpinnings of our approach, and how you need to look at stuff before, during, and after an attack. Regardless of where in the attack lifecycle you end up, there is a common requirement: data. As we mentioned, you only get one opportunity to capture the data, and then it’s gone. So in order to react faster and better in your environment, you will need lots of data. How and where do you collect it? In theory, we say get everything you can and worry about how useful it is later. Obviously that’s not exactly practical in most environments, so you need to prioritize the data requirements based on the most likely attack vectors. Yes, we’re talking risk management here, but it’s okay – a little prioritization based on risk won’t kill you.

Collect What?

The good news is there is no lack of data to capture. Let’s list some of the biggest buckets you have to parse:

  • Events/Logs: It’s obvious, but we still have to mention it. Event logs tell you what happened and when; they provide the context for many other data types, as well as validation for attacks.
  • Database activity: Database audit logs provide one aspect of application activity, but understanding the queries and how they relate to specific attacks is very helpful for profiling normal behavior, and thus understanding what isn’t normal.
  • Application data: Likewise, application data beyond the logs – including transactions, geo-location, etc. – provides better visibility into the context of application usage, and can pinpoint potential fraudulent activity. We discussed both database activity monitoring and application monitoring in detail in the Monitoring up the Stack series.
  • Network flow: Understanding which devices are communicating with which others enables pattern analysis to pinpoint strange network activity – which might represent an attack exfiltrating data, or reconnaissance activity.
  • Email: One of the most significant data leakage vectors is email (even if it is not always malicious), so it needs to be monitored and collected.
  • Web traffic: Like email, web traffic data drives alerts and provides useful information for forensics.
  • Configuration data: Most malware makes some kind of change to devices, so by collecting and monitoring device configurations those changes can be isolated and checked against policy to quickly detect an outbreak.
  • Identity: Finally, an IP address is useful, but being able to map it back to a specific user, and then track that user’s activity within your environment, is much more powerful.

We have dug pretty deeply into all these topics in our Understanding and Selecting a SIEM/Log Management research report, as well as the Monitoring up the Stack series. Check out those resources for a deeper dive.

Collect How?

Now that you know what you want, how are you going to collect it? That depends a lot on the data. Most organizations have some mix of the following classes of devices:

  • SIEM/Log Management
  • Database Activity Monitoring
  • Fraud detection
  • CMDB (configuration management database)
  • Network Behavioral Analysis
  • Full Network Packet Capture

So there are plenty of tools you can use, depending on what you want to collect. Again, we generally tend to want to capture as much data as possible, which is why we like the idea of full network packet capture for as many segments as possible.
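One practical wrinkle in collecting all these buckets is that each source produces records in a different shape. A minimal sketch of one way to handle that – reducing heterogeneous sources to a common record before aggregation – is below. The field names and sample sources are hypothetical illustrations, not any particular SIEM’s schema.

```python
# Illustrative sketch only: a common record shape so events from logs,
# database audit trails, netflow, email, etc. can be stored and queried
# together. Field names and example values are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class MonitoringEvent:
    source_type: str            # "log", "db_audit", "netflow", "email", ...
    device: str                 # where the event was observed
    timestamp: datetime         # normalized to UTC at collection time
    user: str | None = None     # mapped identity, when available
    src_ip: str | None = None
    dst_ip: str | None = None
    raw: str = ""               # original record, kept for forensics
    attributes: dict = field(default_factory=dict)  # type-specific fields

# Two very different sources reduced to the same shape:
events = [
    MonitoringEvent("db_audit", "ora-prod-01",
                    datetime.now(timezone.utc), user="app_svc",
                    raw="SELECT * FROM customers",
                    attributes={"rows_returned": 48210}),
    MonitoringEvent("netflow", "core-switch-2",
                    datetime.now(timezone.utc),
                    src_ip="10.1.4.7", dst_ip="203.0.113.9",
                    attributes={"bytes_out": 8_400_000}),
]
```

Keeping the raw record alongside the normalized fields matters: normalization is lossy, and forensics needs the original.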
Mike wrote a thought piece this week about Vaults within Vaults, and part of that approach is a heavy dose of network segmentation, along with capturing network traffic on the most sensitive segments. But capturing the data is only the first step in a journey of many miles. Then you have to aggregate, normalize, analyze, and alert on that data – across data types – to get real value. As mentioned above, we have already published a lot of information about SIEM/Log Management and Database Activity Monitoring, so check out those resources for more detail. As we dig into what has to happen at each period within the attack lifecycle, we’ll delve into how best to use the data you are collecting at that point in the process. But we don’t want to get ahead of ourselves – the first aspect of Incident Response is making sure you are organized appropriately to respond to an incident. Our next post will focus on that.
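As a toy illustration of that “analyze and alert across data types” step, the sketch below flags a host that shows up in both a bulk database read and a large outbound transfer within a short window – the kind of cross-source pattern a single data type would miss. The thresholds, field layouts, and sample values are invented; real correlation logic lives in your SIEM.

```python
# Toy cross-source correlation: alert when the same internal host shows
# a bulk database read and a large outbound flow close together in time.
# All hosts, thresholds, and timestamps are invented for illustration.
from datetime import datetime, timedelta

db_events = [    # (host, rows_read, time) from database audit logs
    ("10.1.4.7", 48210, datetime(2010, 10, 20, 2, 14)),
]
flow_events = [  # (host, bytes_out, time) from network flow records
    ("10.1.4.7", 8_400_000, datetime(2010, 10, 20, 2, 21)),
]

WINDOW = timedelta(minutes=15)

def exfil_candidates(db_events, flow_events,
                     min_rows=10_000, min_bytes=5_000_000):
    """Pair bulk DB reads with big outbound flows from the same host."""
    alerts = []
    for host, rows, t1 in db_events:
        if rows < min_rows:
            continue
        for fhost, out_bytes, t2 in flow_events:
            if (fhost == host and out_bytes >= min_bytes
                    and abs(t2 - t1) <= WINDOW):
                alerts.append((host, rows, out_bytes, t2))
    return alerts

for host, rows, out_bytes, when in exfil_candidates(db_events, flow_events):
    print(f"ALERT {when}: {host} read {rows} rows then sent {out_bytes} bytes")
```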


Incite 10/20/2010: The Wrongness of Being Right

One of my favorite sayings is “Don’t ask the question if you don’t want the answer.” Of course, when I say answer, what I really mean is opinion. It makes no difference what we are talking about – I probably have an opinion. In fact, a big part of my job is to have opinions and share them with whoever will listen (and even some who won’t). But to have opinions means you need to judge. I like to think I have a finely tuned bullshit detector. I’ve had vendors lying to me since I got into this business 18 years ago, and a lot of end users can be delusional about their true situations as well. So I’m judging what’s happening around me at all times, and I tell them what I think – even if they don’t want to hear my version of the truth. Sometimes I make snap judgments; other times I only take a position after considerable thought and research. I’m trying to determine whether something is right or wrong, based on all the information I can gather at that point in time.

But I have come to understand that right and wrong are nothing more than opinions. What is right for you may be wrong for me, or vice-versa. It took me a long, long time to figure that out. Most folks still don’t get this. I can recall when I was first exposed to the Myers-Briggs test, when I stumbled on a book that included it. Taking that test was very enlightening for me. Turns out I’m an INTJ, which means I build systems, can be blunt and socially awkward (go figure), and tend to judge everything. Folks like me make up about 1% of the population (though probably a bit more in tech and in the executive suite). I knew I was different ever since my kindergarten teacher tried to hold me back (true story), but I never really understood why. Even if you buy into the idea that there are just 16 personality types, clearly there is a spectrum across each of the 4 characteristics. In my black/white world, there seems to be a lot of color. Who knew?

This train of thought was triggered by a tweet by my pal Shack, basically calling BS on one guy’s piece on the value of not trying to be successful. That’s not how Dave rolls. And that’s fine. Dave is one guy. The dude writing the post is another. What works for that guy clearly wouldn’t work for Dave. What works for me doesn’t work for you. But what we can’t do is judge it as right or wrong. It’s not my place to tell some guy he needs to strive for success. Nor is it my place to tell the Boss not to get upset about something someone said. I would like her not to get upset, because when she’s upset it affects me, and it’s all about me. But if she decides to get upset, that’s what she thinks is the right thing to do.

To make this all high concept: a large part of our social problems boil down to one individual’s need to apply their own concept of right to another. Whether it’s religion or politics or parenting or values or anything else, everyone thinks they are right, so they ridicule and persecute those who disagree. I’m all for intelligent discord, but at some point you realize you aren’t going to get to common ground. Not without firearms. Trying to shove your right into my backside won’t work very well. The next time someone does something you think is just wrong, take a step back. Try to put yourself in their shoes and see if there is some way you can rationalize the fact that they think it’s right. Maybe you can see it, maybe you can’t. But unless that decision puts you in danger, you just need to let it go. Right?

Glad you decided to see it my way (the right way). – Mike

Photo credits: “wrong way/right way” originally uploaded by undergroundbastard

Recent Securosis Posts
  • Vaults within Vaults
  • React Faster and Better: Introduction
  • Monitoring up the Stack: Platform Considerations
  • Monitoring up the Stack: Climbing the Stack
  • Dead or Alive: Pen Testing

Incite 4 U
  • Verify your plumbing (or end up in brown water) – Daniel Cox of BreakingPoint busts devices for a living, so it’s interesting to read some of his perspectives on what you need to know about your networking gear. Remember: no network, no applications. So if your network is brittle, your business will be brittle. Spoken like a true plumber, no? There is good stuff there, like understanding what happens during a power cycle and the logging characteristics of the device. The one I like best is #5: Do Not Believe Your Vendor. That’s great advice for any type of purchase. The vendor’s job is to sell you. Your job is to solve a problem. Rarely the twain shall meet, so verify all claims. But only if you want to keep your job, because folks covered in the brown stuff tend to get jettisoned quickly. – MR
  • It’s new, and it’s old – Adam Shostack’s post Re-architecting the Internet poses a valid question: if we were to build a new Internet from scratch, would it be more secure? I think I have argued both sides of the “need a new Internet” debate at one time or another. Now I am kind of nonplussed by the whole discussion, because I believe there won’t be a new Internet, and there won’t be a single Internet. We need to change what we do, but we don’t need a new Internet to do it. There is no reason we cannot continue to use the physical Internet we have and just virtualize the presentation. Much as a virtual server leverages whatever hardware it has to run different virtual machines, there is no reason we can’t have different virtual Internets running over the same physical infrastructure. We have learned from information centric security that we can encapsulate information


White Paper Goodness: Understanding and Selecting an Enterprise Firewall

What? A research report on enterprise firewalls. Really? Most folks figure firewalls have evolved about as much over the last 5 years as ant traps. They’re wrong, of course, but people think of firewalls as old, static, and generally uninteresting. That view is unfounded. Firewalls continue to evolve, and their new capabilities can and should impact your perimeter architecture and firewall selection process. That doesn’t mean we are advocating yet another rip-and-replace job at the perimeter (sorry, vendors), but there are definitely new capabilities that warrant consideration – especially as the maintenance renewals on your existing gear come due.

We have written a fairly comprehensive paper that delves into how the enterprise firewall is evolving, the technology itself, how to deploy it, and ultimately how to select it. We assembled this paper from the Understanding and Selecting an Enterprise Firewall blog series from August and September 2010. Special thanks to Palo Alto Networks for sponsoring the research. You can check out the page in the research library, or download directly: Understanding and Selecting an Enterprise Firewall


Vaults within Vaults

My session for the Atlanta BSides conference was about what I expect in 2011. I might as well have thrown a dart at the wall. But the exercise got me thinking about the newest attacks (like Stuxnet) and the realization that state-sponsored attackers have penetrated our networks with impunity. Clearly we have to shake up the status quo in order to keep up. This is a point I hit on in last week’s Incite, when discussing Greg Shipley’s post on being outgunned. Obviously what we are doing now isn’t working, and if anything the likelihood of traditional controls such as perimeter defense and anti-malware agents protecting much of anything decreases with every application moved up to the cloud and each trading partner allowed into your corporate network.

The long-term answer is to protect the fundamental element: the data. Rich and Adrian (with an assist from Gunnar) are all over that. As what we used to call applications continue to decompose into data, logic, processing, and presentation, we have neither control over nor visibility into the data at most points in the cycle. So we are screwed unless we can figure out some way to protect the data regardless of how, where, or by whom it’s going to be used. But that is going to be a long, long, long, long slog. We don’t even know how to think about tackling the problem, so solutions are probably a decade away – and that’s being optimistic. Unfortunately that’s the wrong answer, because we have the problem now and need to start thinking about what to do. Did I mention we need answers now?

Since I’m the plumber, I took a look into my tool bag and started thinking about what we could do within the constraints of our existing infrastructure, political capital, and knowledge to give us a better chance. This was compounded by the recent disagreement Adrian and I had about how much monitoring is necessary (and feasible), driven by Kindervag’s ideas on Zero Trust. I always seem to come back to the idea of not a disappearing perimeter, but multiple perimeters. Sorry Jerichonians, but the answer is more effectively segmenting networks, with increasingly stringent controls based on the type and sensitivity of data within each domain.

Right, this is not a new idea. It’s the idea of trust zones based on type of data. The military has been doing this for years. OK, maybe it isn’t such a great idea… Yes, I’m kidding. Many folks will say this doesn’t work – that it’s just the typical defense in depth rhetoric, which says you need everything you already have, plus this new shiny object to stop the new attack. But the problem isn’t with the architecture, it’s with the implementation. We don’t compartmentalize – not even when PCI says to. We run into far too many organizations with flat networks. From a network ops standpoint, flat networks are certainly a lot easier to deal with than networks segmented based on what data can be accessed. But flat networks don’t provide the hierarchy necessary to protect what’s important, and we have to understand that we don’t have the money (or resources) to protect everything. Not everything needs to be protected with the same level of control.

OK Smart Guy, How?

Metaphorically, think about each level of segmented network as a vault. As you climb the stack of data importance, you tighten the controls and make it harder to get to the data (and theoretically harder to compromise it), basically implementing another vault within the first.
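One way to picture the nesting: each inner vault keeps every control of the vaults outside it and layers on stricter ones. The sketch below is purely illustrative – the zone names and control choices are hypothetical examples (loosely anticipating the call center / transactional / intellectual property examples later in this post), not a prescribed architecture.

```python
# Hypothetical sketch of nested trust zones ("vaults within vaults"):
# each inner zone inherits all controls of the outer zones and adds its
# own stricter ones. Zone and control names are illustrative only.

zones = [  # ordered from outermost (least sensitive) to innermost
    ("perimeter",     ["firewall", "ids_ips"]),
    ("private_data",  ["network_segmentation", "dlp"]),
    ("transactional", ["database_activity_monitoring", "app_monitoring"]),
    ("crown_jewels",  ["application_whitelisting", "full_packet_capture"]),
]

def controls_for(zone_name):
    """Accumulate controls from the outermost vault inward to the zone."""
    active = []
    for name, controls in zones:
        active.extend(controls)
        if name == zone_name:
            return active
    raise ValueError(f"unknown zone: {zone_name}")

print(controls_for("crown_jewels"))
# -> ['firewall', 'ids_ips', 'network_segmentation', 'dlp',
#     'database_activity_monitoring', 'app_monitoring',
#     'application_whitelisting', 'full_packet_capture']
```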
So an attacker going after the crown jewels needs to do more than compromise a vulnerable Windows 2000 Server that someone forgot about to see the targeted assets. Here’s how we do it:

  • Figure out what’s important: Yes, I’ve been talking about this for years (it’s the first step of the Pragmatic CSO).
  • Find the important data: This is the discover step from all the Pragmatic Data Security research we’ve done.
  • Assess the data’s importance: This gets back to prioritization and value. How much you can and should spend on protecting the data needs to correlate with how valuable it is, right? You should probably look at 3-4 different levels of data importance/value.
  • Re-architect your network: This means working with the network guys to figure out how to segment your networks to cordon off each level of increasingly sensitive data.
  • Add controls: Your existing perimeter defenses are probably fine for the first layer. Then you need to figure out what kind of controls are appropriate for each layer.

More on Controls

Again, the idea of layered controls is old and probably a bit tired. You don’t want a single point of failure. No kidding? But here I’m talking about figuring out what controls are necessary to protect the data, depending on its sensitivity. For example, maybe you have a call center, and those folks have access to private data. Obviously you want that behind more than just the first perimeter, but the reality is that most of the risk is from self-inflicted injury – you know, a rep sending data out inadvertently. Sounds like a situation where DLP would be appropriate. Next you have some kind of transactional system that drives your business. For that layer, you monitor database and application activity. Finally you have intellectual property that is the underpinning of your business. This is the most sensitive stuff you have, so it makes sense to lock it down as tightly as possible. Any devices on this network segment are locked down using application whitelisting, and you probably want to implement full network packet capture, so you know exactly what is happening and can watch for naughty behavior. I’m making this up, but hopefully the idea of implementing different (and more stringent) controls in each network segment makes sense. None of this is new.

As I start to think about my 2011 research agenda, I like this idea of vaults (driven by network segmentation) as a metaphor for infrastructure security. But this isn’t just my show. I’m interested in whether you all think there is


Monitoring up the Stack: Climbing the Stack

As we have discussed throughout this series, monitoring additional data types can extend the capabilities of SIEM in a number of different ways. But you have lots of options for which direction to go, so the real question is: where do you start? Clearly you are not going to start monitoring all of these data types at once, particularly because most of them require some integration work on your part – often a great deal. Honestly, there are no hard and fast answers on where to start, or which type of monitoring is most important. Those decisions must be based on your specific requirements and objectives. But we can describe a couple of common approaches for climbing the monitoring stack.

Get more from SIEM

The first path involves organizations simply looking to do more with what they have – squeezing additional value from the SIEM system they already own. They start by collecting data from the monitoring systems already in place, where they already have the data or can easily get it. From there they add capabilities in order, from easiest to hardest. Usually that means file integrity monitoring first. As an additional monitoring capability, file integrity is a bit of a standalone feature, but a critical one, because most attacks have some impact on critical system files and so can be detected by monitoring file integrity.

Next comes identity monitoring – most SIEM platforms coordinate with server/desktop operations management systems, so this capability is relatively straightforward to add. Why do this? Identity monitoring systems include audit capabilities which provide events to the SIEM, in order to audit access control system activity and map local events back to domain identities. From there it’s a logical progression to add user activity monitoring: you leverage the combination of SIEM functions and identity monitoring data against a set of new rules and dashboards implemented to track user activity. As sophistication increases, third-party web security, endpoint agents, and content analysis tools can provide additional data to fill out a comprehensive view of user activity.

Once those activities are mastered, these organizations tackle database and application monitoring. These two data types overlap less in terms of analysis and data collection techniques, provide more specialized analysis, and address detection of a different class of attacks. Their implementations also tend to be the most resource intensive, so without a specific catalyst to drive implementation they tend to fall to the bottom of the list.

Responding to Threats

In the second post in this series, we outlined many of the threats that prompt IT organizations to consider monitoring: malware, SQL injection, and other types of system misuse. If managing these threats is the catalyst for extending your monitoring infrastructure, the progression of data types to add will depend entirely on which attacks you need to address. If you’re interested in stopping web attacks, you’ll likely start with application monitoring, followed by database activity and identity monitoring. Malware detection will drive you towards file integrity monitoring initially, and then probably to identity and user activity monitoring, because bad behavior by users can indicate a malware outbreak. If you want to detect botnets, user activity monitoring and identity monitoring are a good start.
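Since file integrity monitoring is the suggested first step, here is a bare-bones sketch of the underlying technique: hash the files you care about, store a baseline, and diff against it later. A real FIM product adds tamper-resistant baseline storage, real-time kernel hooks, and change attribution; the watch list and filenames below are hypothetical.

```python
# Bare-bones file integrity check: hash critical files, compare against
# a stored baseline, and report changes. Real FIM tools do far more;
# the watched paths and baseline filename are hypothetical examples.
import hashlib, json, os

WATCHED = ["/etc/passwd", "/etc/hosts"]   # illustrative watch list
BASELINE_FILE = "fim_baseline.json"

def hash_file(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def snapshot():
    return {p: hash_file(p) for p in WATCHED if os.path.exists(p)}

def save_baseline():
    with open(BASELINE_FILE, "w") as f:
        json.dump(snapshot(), f)

def check():
    with open(BASELINE_FILE) as f:
        baseline = json.load(f)
    for path, old_digest in baseline.items():
        if not os.path.exists(path):
            print(f"MISSING: {path}")
        elif hash_file(path) != old_digest:
            print(f"CHANGED: {path}")

if __name__ == "__main__":
    if os.path.exists(BASELINE_FILE):
        check()           # compare current state against the baseline
    else:
        save_baseline()   # first run records the trusted baseline
```

The obvious weakness – an attacker with root can rewrite the baseline too – is exactly why commercial tools protect the baseline and why FIM data should feed the SIEM rather than live on the monitored host.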
Your data type priorities will be driven by what you want to detect, based on the greatest risks you perceive to your organization. Though it’s a bit beyond the scope of this research project, we are big fans of threat modeling, because it provides structure for what you need to worry about and how to defend against it. With a threat model – even one on the back of an envelope – you can map the threats to information your SIEM already provides, and then decide which supplementary add-on functions are necessary to detect attacks.

Privileged Users

One area we tend to forget is the folks who hold the keys to the kingdom: administrators and others with privileged access to the resources that drive your organization. This is also a favorite of the auditors out there – perhaps something to do with low hanging fruit – and we see a lot of folks look to advanced monitoring to address an audit deficiency. To monitor the activity of your privileged users, you’ll move towards identity and user activity monitoring first. These data types allow you to identify who is doing what, and where, to detect malfeasance. From there you add file integrity monitoring – changing system files is an easy way for someone with access to make sure they keep it, and to hide their trail. Database monitoring comes next, as users changing database access roles can indicate something amiss. The point here is that you’ve probably been doing security far too long to trust anyone, and enhanced monitoring can provide the data you need to understand what those insiders are really doing on your key systems.

Political Land Mines

Any time new technologies are introduced, someone has to do the work. Monitoring up the Stack is no different, and perhaps a bit harder, because it crosses multiple organizational fiefdoms and requires consensus – which translates roughly to politics. And politics means you can’t get anything done without cooperation from your coworkers. We can’t stress this enough: many good projects die not because of need, budget, or technology, but due to a lack of interdepartmental cooperation. And why not? Most of the time the people who need the data – or even fund the project – are not the folks who have to manage things on a day to day basis. As an example, DAM installation and maintenance falls on the shoulders of database administrators. All they see is more work. Not only do they have to install the product, but they get blamed for any performance and reliability issues it causes. Pouring more salt into the wound, the DAM system is designed to monitor database administrators! Not only is the DBA’s job now harder because they can’t use their favorite


Incident Response Fundamentals: Introduction

Over the past year, we as an industry have come to realize that we are dealing with different adversaries, using different attack techniques, with different goals. Yes, the folks looking for financial gain by compromising devices are still out there. But add a well-funded, potentially state-sponsored, persistent and patient adversary to the mix, and we need to draw a new conclusion: we now must assume our networks and systems are compromised. That is a tough realization, but any other conclusion doesn’t jibe with reality – at least the reality of everyone we talk to.

For a number of years we have been calling bunk on the concept of “getting ahead of the threat” – most of the things viewed as proactive. Anyone trying to take such action has been disappointed by their inability to stop attacks, regardless of how much money or political capital they expended to drive change. Basing our entire security strategy on the belief that we can stop attacks if we just spend enough, tune enough, or comply enough is no longer credible – if it ever was. We need to change our definition of success from stopping an attack (which would be nice, but isn’t always practical) to reacting faster and better to attacks, and containing the damage. We’re not saying you should give up on trying to prevent attacks – but place as much (or more) emphasis on detecting, responding to, and mitigating them. This has been a common theme in Securosis research since the beginning, and now we will document exactly what that means and how to get there.

React Faster

We don’t get a lot of push-back anymore on our position that organizations can’t stop all attacks. From a certain perspective that is progress, and we also believe many security professionals have spent a lot of time managing expectations internally, so there is an understanding that perfect security cannot be achieved (or that management is unwilling to fund it and compromise everything else in favor of security improvements). But following that concept to the next step means we need to get much better at detecting attacks sooner. We have already documented a number of approaches at the network layer, in terms of monitoring everything and looking for not normal. They also apply to the application (part 1 & part 2) and database (part 1 & part 2), which we have been talking about in our Monitoring up the Stack series. So in the first part of this new series, we will talk about the data collection infrastructure you should be thinking about, what kind of organizational model allows you to react faster, and what to do before the attack is detected. If you know you are being attacked, you are already ahead of the vast majority of companies out there. But what then?

And Better

Once you understand you are under attack, your incident response process needs to kick in. Most organizations do this poorly, because they have neither the process nor the skills to figure out what’s happening and do something useful about it. Many organizations have a documented incident response program, but that doesn’t mean it’s effective, or that the organization has embraced what it really means to respond to an incident. And this is about much more than just tools and flowcharts. Unless the process is well established and somewhat second nature, it will fail under duress – which is the definition of an incident. It is also important to remember that this process touches much more than just IT.
It must involve other organizations (legal, HR, operational risk, etc.) in order to actually manage or mitigate the organizational risk of any attack. One of the things Rich’s emergency response experience has shown is that chain of command is critical, and everyone must be in alignment on process, responsibilities, and accountabilities before the incident happens. Again, a lot of this stuff seems like common sense (and it is!), but we have seen few organizations do it well, so we’ll walk through what we mean by reacting better throughout the series.

Before, During, and After

The concept we will come back to throughout this series is before, during, and after the attack. This provides context for the different things that must happen based on where you are within the attack lifecycle.

  • Before: Figuring out what data to monitor, how much of it is useful, how to make use of it, and how long to retain it is key to building the infrastructure for persistent monitoring. This must happen before the attack, because you only get one chance to collect that data – while things are happening. You don’t get to go back and record it after the fact (unless you completely fail to learn from the first attack and they hit you again – not a good way to get a second chance!).
  • During: How can you contain the damage as quickly as possible? By identifying the root cause accurately and remediating effectively. We’ll dig into how to identify the attack, who to work with to get the data you need, and how to do this in the heat of battle.
  • After: Once the attack has been contained, focus shifts to making sure it doesn’t happen again. In these posts we’ll discuss the forensics process and the necessary tools and skills – as well as how to maintain chain of custody, and the post mortem required to learn something from a difficult situation.

We’ll also discuss the current state of threat management tools, including SIEM, IDS/IPS, and network packet capture, to define their place in our approach. Finally, we’ll consider how network security is evolving, and what kind of architectural constructs you should be thinking about as you revisit your data collection and defensive strategies. At the end of this series you will have a good overview of how to deal with all sorts of threats, and a high-level process for identifying the issues, containing the damage, and using the feedback loop to ensure you don’t make the same mistakes again. That’s the plan, anyway.
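The before/during/after framing lends itself to a simple checklist structure, which is one way to keep the phases straight when building out a response program. The sketch below only paraphrases the breakdown above into a data structure; the wording of the tasks is ours, and the structure itself is an illustration rather than a prescribed format.

```python
# Illustrative only: organizing incident response tasks by lifecycle
# phase, paraphrasing the before/during/after breakdown in this post.

RESPONSE_PLAN = {
    "before": [
        "decide what data to monitor and how long to retain it",
        "build the persistent collection infrastructure",
        "align chain of command, responsibilities, accountabilities",
    ],
    "during": [
        "identify the attack and its root cause",
        "contain the damage as quickly as possible",
        "pull in legal, HR, and operational risk as needed",
    ],
    "after": [
        "run forensics, maintaining chain of custody",
        "hold a post mortem and feed lessons back into monitoring",
    ],
}

def checklist(phase):
    """Print the numbered tasks for one phase of the lifecycle."""
    for i, task in enumerate(RESPONSE_PLAN[phase], 1):
        print(f"[{phase}] {i}. {task}")

for phase in ("before", "during", "after"):
    checklist(phase)
```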


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.