Wednesday, July 14, 2010

NSO Quant: Define Policies Sub-Process

By Mike Rothman

So many attacks, so little time. If you are like pretty much everyone else we talk to, you are under the gun to figure out what's happening, understand what's under attack, and fix it. Right?

As you engage your monitoring process, you'll be making a ton of decisions once you figure out what you have and what's in scope. These decisions become the policies that govern your monitoring: they set the thresholds, determine how the data is analyzed, and trigger the alerts.

Define Policies

In the high level monitoring process map, we described the define policies step as follows:

Define the depth and breadth of the monitoring process, what data will be collected from the devices, and frequency of collection.

This project focuses specifically on firewalls, IDS/IPS, and servers, so our samples will be tailored to those device types, but nothing in this process precludes monitoring other network, security, computing, application, and/or data capture devices.

There are five sub-processes in this step:

  1. Monitors
  2. Correlation Rules
  3. Alerts
  4. Validation/Escalation
  5. Document

In terms of our standard disclaimer: we build these sub-processes for organizations that want to undertake a monitoring initiative. We don't make any assumptions about the size of the company or whether a tool set will be used. Obviously the process will change depending on your specific circumstances, as you'll do some steps and not others. And yes, building your own monitoring environment is pretty complicated, but we think it's important to give you a feel for everything that is required, so you can compare apples to apples when weighing building your own against buying a product (or products) or using a service.


Monitors

Our first set of policies will be around monitoring, specifically which activities on which devices will be monitored. We've got to have policies on frequency of data collection and also data retention. We also have to give some thought to the risk to the organization based on each policy. For example, your firewall will be detecting all sorts of attacks, so those events have one priority. But if you find a new super-user account on the database server, that's a different level of risk.

The more you think through what you are monitoring, what conclusions you can draw from the data, and ultimately what the downside is when you find something from that device, the more smoothly the rest of the monitoring process will go.
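To make this concrete, here's a minimal sketch (in Python) of what monitoring policy records might capture: which data gets collected from each device type, how often, how long it's retained, and the relative risk priority. The device types, intervals, and retention periods are hypothetical placeholders, not recommendations.

```python
# Hypothetical monitoring policy records: which devices get monitored, what data
# is collected, how often, how long it is retained, and the relative risk if
# that device reports something suspicious.
MONITORING_POLICIES = [
    {
        "device_type": "firewall",
        "data_collected": ["dropped_connections", "config_changes"],
        "collection_interval_minutes": 5,
        "retention_days": 90,
        "risk_priority": "medium",   # routine attack noise
    },
    {
        "device_type": "database_server",
        "data_collected": ["authentication_events", "privilege_changes"],
        "collection_interval_minutes": 1,
        "retention_days": 365,
        "risk_priority": "high",     # e.g. a new super-user account
    },
]

def policies_for(device_type):
    """Return the monitoring policies that apply to a given device type."""
    return [p for p in MONITORING_POLICIES if p["device_type"] == device_type]
```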

Correlation Rules

The issue with correlation is that you need to know what you are looking for in order to set rules to find it, and that requires you to understand the interactions between the different data types. How do you know that? Basically you need to do a threat modeling exercise based on the kinds of attack vectors you want to find. Sure, we'd love to tell you that your whiz-bang monitoring thingy will find all the attacks out of the box and you can get back to playing Angry Birds. But it doesn't work that way.

Thus you get to spend some time at the whiteboard mapping out the different kinds of attacks, suspect behavior, or exploits you've seen and expect to see. Then you map out how you'd detect that attack, including specifics about the data types and series of events that need to happen for the attack to be successful. Finally you identify the correlation policy. You need to put yourself in the shoes of the hacker and think like them. Easy huh? Not so much, but at least it's fun. This is more art than science.
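As an illustration only, here is a minimal sketch of how one threat model – repeated failed logins followed by creation of a privileged account – might become a correlation rule. The event fields, threshold, and window are assumptions, not the output of any particular product.

```python
from datetime import timedelta

# Hypothetical normalized event format: {"time": datetime, "source": "1.2.3.4", "type": "..."}
FAILED_LOGIN = "failed_login"
NEW_ADMIN_ACCOUNT = "new_admin_account"

def correlate_bruteforce_then_escalation(events, threshold=10,
                                         window=timedelta(minutes=30)):
    """Flag any source that racks up `threshold` failed logins and then creates
    a privileged account within `window` of those failures."""
    alerts = []
    failures = {}                                    # source -> recent failure times
    for ev in sorted(events, key=lambda e: e["time"]):
        recent = [t for t in failures.get(ev["source"], [])
                  if ev["time"] - t <= window]       # drop failures outside the window
        if ev["type"] == FAILED_LOGIN:
            recent.append(ev["time"])
        elif ev["type"] == NEW_ADMIN_ACCOUNT and len(recent) >= threshold:
            alerts.append({"source": ev["source"], "time": ev["time"],
                           "rule": "bruteforce_then_privilege_escalation"})
        failures[ev["source"]] = recent
    return alerts
```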

Depending on the device, there may be resources that can get you started – open source tools such as Snort, OSSEC, or OSSIM ship with sets of default policies. Again, don't just think you'll be able to download some stuff and get moving. You need to actively think about what you are trying to detect and build your policies accordingly. In order to maintain sanity, understand that defining (and refining and tuning, and refining and tuning some more) correlation policies is an ongoing process.

Finally, it’s important to realize that you cannot possibly build a threat model for every kind of attack that may or may not be launched at your organization. So you are going to miss some stuff, and that’s part of the game. The objective of monitoring is to react faster, not react perfectly to every attack. The threat modeling exercise focuses you on watching for the most significant risks to your organization.


Alerts

In this step you define the scenarios that trigger alerts. You need to define what counts as an information-only alert, as well as a range of alert severities, depending on the attack (and the risk it represents). The best way we know to do this is to go back to your threat models. For each attack vector there are likely a set of situations that are low priority and a set that are higher. What are the thresholds that determine the difference? Those belong in your alerting policies, since you want to clearly document how you are doing things and why.

Next you need to define your notification strategy by alert severity. Will you send a text message or an email, or does a big red light flash in the SOC calling all hands on deck? We spent a lot of time during the scoping process getting consensus, but it's not over at the end of that step. You also need to make sure everyone is on the same page about how you are going to notify them when an alert fires. You don't want an admin to miss something important because they expected a text and don't live in their email.
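Here's a hedged sketch of what an alerting/notification policy might look like once severities and channels are agreed on. The severity names, channels, and recipients are made up for illustration.

```python
# Hypothetical alerting policy: map alert severity to notification channels and
# the groups that need to know when it fires.
ALERT_POLICY = {
    "info":     {"notify": ["dashboard"],         "recipients": ["soc"]},
    "low":      {"notify": ["email"],             "recipients": ["soc"]},
    "high":     {"notify": ["email", "sms"],      "recipients": ["soc", "net_ops"]},
    "critical": {"notify": ["sms", "phone_call"], "recipients": ["soc", "ciso"]},
}

def notify(severity, message, send):
    """Dispatch `message` according to the policy for `severity`.
    `send(channel, recipient, message)` stands in for whatever transport is actually used."""
    policy = ALERT_POLICY[severity]
    for channel in policy["notify"]:
        for recipient in policy["recipients"]:
            send(channel, recipient, message)
```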


Validation/Escalation

So as we go through our policy definition efforts, we've built threat models and the associated correlation and alerting policies. Next we need to think about how to prove whether an attack is legitimate or just a false positive. Here you map out what you think an analyst can/should do to prove an attack – maybe looking at the log files, logging into the device, and/or confirming with other devices. The specific validation activities will vary depending on the threat.

At the end of the validation step, you want to be able to definitively answer the question: is this attack real? So your policies should define what 'real' means in this context and set expectations for the level of substantiation required to validate the attack.

You also need to think about escalation, depending on the nature of the alert. Does this kind of alert go to network ops or security? Is it a legal counsel thing? What is the expected response time? How much data about the alert do you need to send along? Does the escalation happen via a trouble ticketing system? Who is responsible for closing the loop? All of these issues need to be worked through and agreed to by the powers that be. Yes, you’ve got to get consensus (or at least approval) for the escalation policies, since it involves sending information to other groups and expecting a specific response.
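A simple way to capture those escalation decisions is a routing matrix. The alert classes, owners, handoff mechanisms, and response times below are hypothetical; the point is that each class has an agreed owner and an expected response time before anything fires.

```python
# Hypothetical escalation matrix: who owns each class of alert, how it is
# handed off, and how quickly a response is expected.
ESCALATION_MATRIX = {
    "network_outage":      {"owner": "network_ops", "via": "ticket", "response_minutes": 60},
    "malware_detection":   {"owner": "security",    "via": "ticket", "response_minutes": 30},
    "data_exfiltration":   {"owner": "security",    "via": "phone",  "response_minutes": 15},
    "policy_violation_hr": {"owner": "legal",       "via": "email",  "response_minutes": 240},
}

def escalate(alert_type, alert_details, open_ticket, page, email):
    """Route an alert using the matrix; the three callables stand in for whatever
    ticketing/paging/email systems are actually in place."""
    route = ESCALATION_MATRIX[alert_type]
    handler = {"ticket": open_ticket, "phone": page, "email": email}[route["via"]]
    handler(route["owner"], alert_details, route["response_minutes"])
```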


Document

Finally, you've worked through all the issues – well, all the issues you can model and know about – so the last step is to document all the policies and communicate responsibilities and expectations to the operations teams (and anyone else in the escalation chain). These policies are living documents, and will change frequently as new attacks are found in the wild and new devices/applications appear in your network.


You also need to think about whether you are going to build a baseline from the data you collect. This involves monitoring specific devices for a period of time, assuming what you find is normal, and then looking for behavior that deviates from it. This is certainly an approach to streamline getting your monitoring system online, but understand it involves making assumptions about what is good or bad, and those assumptions may not be valid.

We prefer doing both: the threat modeling exercise as well as establishing a baseline. We'd say it provides the best of both worlds, but that would be corny.

Device Type Variance

For each device type (firewall, IDS/IPS, servers) you largely go through the same process. You’ve got to figure out what data you will collect, any specific correlation that makes sense, what you’ll alert on, and how you’ll validate the alert and who gets the joy of receiving the escalation. Obviously each device provides different data about different aspects of the attack. By relying on the threat models to guide your policies, you can focus your efforts without trying to boil the ocean.

Big vs. Small Company Considerations

The biggest difference we tend to see in how big companies versus small companies do monitoring is the amount of data and the number of policies used to drive correlation and alerting. Obviously big companies have the economic resources to invest in tools and in services to get those tools operational (and keep them operational).

So small companies need to compromise, which means you aren't going to be able to find everything – so don't try. Again, that's where the threat models come into play. By focusing on the highest-risk models, you can go through the process and do a good job with a handful of threat models. Large companies, on the other hand, need to be aware of analysis paralysis, since there is effectively an infinite number of threat models, correlation policies, and alerts that can be defined. At some point the planning has to stop and the doing has to start.

And speaking of starting to do things, tomorrow we'll go through the Collect and Store steps as we move into actually monitoring something.

—Mike Rothman

Simple Ideas to Start Improving the Economics of Cybersecurity

By Rich

Today Howard Schmidt meets with Secretary of Commerce Gary Locke and Department of Homeland Security Secretary Janet Napolitano to discuss ideas for changing the economics of cybersecurity. Howard knows his stuff, and recognizes that this isn’t a technology problem, nor something that can be improved with some new security standard or checklist. Crime is a function of economics, and electronic crime is no exception.

I spend a lot of time thinking about these issues, and here are a few simple suggestions to get us started:

  • Eliminate the use of Social Security Numbers as the primary identifier for our credit histories and financial accounts. Phase the change in over time. When the banks all scream, ask them how they do it in Europe and other regions.
  • Enforce a shared-costs model for credit card brands. Right now, banks and merchants carry nearly all the financial costs associated with credit card fraud. Although PCI is helping, it doesn’t address the fundamental weaknesses of the current magnetic stripe based system. Having the card brands share in losses will increase their motivation to increase the pace of innovation for card security.
  • Require banks to extend the window of protection for fraudulent transactions on consumer and business bank accounts. Rather than forcing some series of fraud detection or verification requirements, extending the window during which consumers and businesses aren't liable for losses will motivate banks to make the structural changes themselves – for example, by requiring transaction confirmation for ACH transfers over a certain amount.
  • Within the government, require agencies to pay for incident response costs associated with cybercrime at the business unit level, instead of allowing it to be a shared cost borne by IT and security. This will motivate individual units to better prioritize security, since the money will come out of their own budgets instead of being funded by IT, which doesn’t have operational control of business decisions.

Just a few quick ideas to get us started. All of them are focused on changing the economics, leaving the technical and process details to work themselves out.

There are two big gaps that aren’t addressed here:

  • Critical infrastructure/SCADA: I think this is an area where we will need to require prescriptive controls (air gaps & virtual air gaps) in regulation, with penalties. Since that isn’t a pure economic incentive, I didn’t include it above.
  • Corporate intellectual property: There isn’t much the government can do here, although companies can adopt the practice of having business units pay for incident response costs (no, I don’t think I’ll live to see that day).

Any other ideas?


Incite 7/14/2010: Mello Yello

By Mike Rothman

I’m discovering that you do mellow with age. I remember when I first met the Boss how mellow and laid back her Dad was. Part of it is because he doesn’t hear too well anymore, which makes him blissfully unaware of what’s going on. But he’s also mellowed, at least according to my mother in law. He was evidently quite a hothead 40 years ago, but not any more. She warned me I’d mellow too over time, but I just laughed. Yeah, yeah, sure I will.

They call him Mello Yello...

But sure enough, it's happening. Yes, the kids still push my buttons and make me nuts, but most other things just don't get me too fired up anymore. A case in point: the Securosis team got together last week for another of our world domination strategy sessions. On the trip back to the airport, I heard strange music. We had rented a Kia Soul, with the dancing hamsters and all, so I figured it might be the car. But it was my iPad cranking music.

WTF? What gremlin turned on my iPad? It took me a few seconds, but I found the culprit. I carry an external keyboard with the iPad, and evidently it turned on, connected to the iPad, and proceeded to try to log in a bunch of times with whatever random strings got pressed on the keyboard inside my carrying case. Turns out the security on the iPad works – at least against a brute force attack. I was locked out and needed to sync with my computer in the office to get back in.

I had my laptop, so I wasn’t totally out of business. But I was about 80% of the way through Dexter: Season 2 and had planned to watch a few more episodes on the flight home. Crap – no iPad, no Dexter. Years ago, this would have made me crazy. Frackin’ security. Frackin’ iPad. Hate hate hate. But now it was all good. I didn’t give it another thought and queued up for an Angry Birds extravaganza on my phone.

Then I remembered that I had the Dexter episodes on my laptop. Hurray! And I got an unexpected upgrade, with my very own power outlet at my seat, so my mostly depleted battery wasn’t an issue. Double hurray!! I could have made myself crazy, but what’s the point of that?

Another situation arose lately when I had to defuse a pretty touchy situation between friends. It could have gotten physical, and therefore ugly, with long-term ramifications. But diplomatic Mike got in, made peace, and positioned everyone to kiss and make up later. Not too long ago, I probably would have gotten caught up in the drama and made the situation worse.

As I was telling the Boss the story, she deadpanned that it must be the end of the world. When I shot her a puzzled look, she just commented that when I’m the voice of reason, armageddon can’t be too far behind.

– Mike.

Photo credits: “mello yello” originally uploaded by Xopher Smith

Recent Securosis Posts

  1. School’s out for Summer
  2. Taking the High Road
  3. Friday Summary: July 9 2010
  4. Top 3 Steps to Simplify DLP Without Compromise
  5. Preliminary Results from the Data Security Survey
  6. Tokenization Architecture – The Basics
  7. NSO Quant: Enumerate and Scope Sub-Processes

Incite 4 U

Since we provided an Incite-only mailing list option, we’ve started highlighting our other weekly posts above. One to definitely check out is the Preliminary Results from the Data Security Survey, since there is great data in there about what’s happening and what’s working. Rich will be doing a more detailed analysis in the short term, so stay tuned for that.

  1. You can’t be half global… – Andy Grove (yeah, the Intel guy) started a good discussion about the US tech industry and job creation. Gunnar weighed in as well with some concerns about lost knowledge and chain of experience. I don’t get it. Is Intel a US company? Well, it’s headquartered in the US, but it’s a global company. So is GE. And Cisco and Apple and IBM and HP. Since when does a country have a scoreboard for manufacturing stuff? The scoreboard is on Wall Street and it’s measured in profit and loss. So big companies send commodity jobs wherever they find the best mix of cost, efficiency, and quality. We don’t have an innovation issue here in the US – we have a wage issue. The pay scales of some job functions in the US have gone way over their (international) value, so those jobs go somewhere else. Relative to job creation, free markets are unforgiving and skill sets need to evolve. If Apple could hire folks in the US to make iPhones for $10 a week, I suspect they would. But they can’t, so they don’t. If the point is that we miss out on the next wave of innovation because we don’t assemble the products in the US, I think that’s hogwash. These big companies have figured out sustainable advantage is moving out of commodity markets. Too bad a lot of workers don’t understand that yet. – MR

  2. Tinfoil hats – Cyber Shield? Really? A giant monitoring project? I don't really understand how a colossal systems monitoring project is going to shield critical IT infrastructure. It may detect cyber threats, but only if they know what they are looking for. The actual efforts are classified, so we can't be sure what type of monitoring they are planning to do. Maybe it's space alien technology we have never seen before, implemented in ways we could never have dreamed of. Or maybe it's a couple hundred million dollars to collect log data and worry about analysis later. Seriously, if the goal here is to protect critical infrastructure, here's some free advice: take critical systems off the freakin' Internet! Yeah, putting these systems on the 'Net many years ago was a mistake because these organizations are both naive and cheap. Admit the mistake and spend your $100M on private systems that are much easier to secure, audit, and monitor. The NSA has plenty of satellites … I am sure they can spare some bandwidth for power and other SCADA control systems. If it's really a matter of national security to protect these systems, do that. Otherwise it's just another forensic tool to record how they were hacked. – AL

  3. Conflict of interest much? – Testing security tools is never easy, and rarely reflects how they would really work for you. Mike covered this one already, but it is, yet again, rearing its head. NSS Labs is making waves with its focus on “real world” antivirus software testing. Rather than running tools against a standard set of malware samples, they’ve been mixing things up and testing AV tools against live malware (social engineering based), and modifications of known malware. The live test gives you an idea of how well the tools will work in real life with actual users behind them. The modifications tests give you an idea of whether the tools will detect new variants of known attacks. Needless to say, the AV vendors aren’t happy and are backing their own set of “standards” for testing while disparaging NSS, except the ones who scored well. I realize this is how the world works, but it’s still depressing. – RM

  4. Automating firewall ops – Speaking of product reviews, NetworkWorld published one this week on firewall operations tools. You know, those tools that suck in firewall configs, analyze them and maybe even allow you to change said firewalls without leaving a hole so big the Titanic could sail through? Anyhow, this still feels like a niche market even though there are 5 players in it, because you need to have a bunch of firewalls to take advantage of such a tool. Clearly these tools provide value but ultimately it comes back to pricing. At the right price the value equation adds up. Ultimately they need to be integrated with the other ops tools (like patch/config, SIEM/LM, etc.), since the swivel chair most admins use to switch between different management systems is worn out. – MR

  5. Eternal breach – Although credit cards are time limited (they come with expiration dates), a lot of other personal information lives longer than you do. Take your Social Security Number or private communications… once these are lost in a breach, any breach, the data stays in circulation and remains sensitive. That’s why the single year of credit monitoring offered by most organizations in their breach response is a bad joke. The risk isn’t limited to a year, so this is a CYA gesture. Help Net Security digs into this often ignored problem. I don’t really expect things to get any better; our personal information is all over the darn place, and we are at risk as soon as it’s exposed once… from anywhere. I’m going to crawl back into my bunker now. – RM

  6. Deals, Good ‘n’ Plenty – There is no stopping the ongoing consolidation in the security space. Last week the folks at Webroot bought a web filtering SaaS shop called BrightCloud. Clearly you need both email and web filtering (yeah, that old content thing), so it was a hole in Webroot’s move towards being a SaaS shop. Yesterday we also saw GFI acquire Sunbelt’s VIPRE AV technology. This seems like a decent fit because over time distribution leverage is key to ongoing sustainability. That means you need to pump more stuff into existing customers. And given the price set by Sophos’ private equity deal, now was probably a good time for Sunbelt to do a deal, especially if they were facing growing pains. Shavlik seems a bit at risk here, since they OEM Sunbelt and compete with GFI. – MR

  7. E-I-eEye-Oh! – During the last economic downturn, the dot-com bust days of 2000, HR personnel used to love to call people ‘job hoppers’. “Gee, it seems you have had a new job every 24 months for the last 6 years. We are really looking for candidates with a more stable track record.” It was a lazy excuse to dismiss candidates, but some of them believed it. I think that mindset still persists, even though the average job tenure in development is shy of 21 months (much shorter for Mike!), and just slightly better for IT. Regardless, that was the first thing that popped into my head when I learned that Marc Maiffret has jumped ship from FireEye back to eEye. Dennis Fisher has a nice interview with Marc over at Threatpost. Feels like just a few weeks ago he joined FireEye, but as most hiring managers will tell you, team chemistry is as important as job skills when it comes to hiring. I was sad to see Marc leave eEye – was it four years ago? – to start Invenio. At the time eEye was floundering, and from my perspective product management was poorly orchestrated. I am sure the investors were unhappy, but Marc seemed to get a disproportionate amount of the heat, and eEye lost a talented researcher. The new management team over at eEye still has their hands full with this reclamation project, but Marc’s a good addition to their research team. If eEye seriously wants to compete with Qualys and Rapid7, they need all the help they can get, and this looks like a good fit for both the company and Marc. Good luck, guys! – AL

  8. Low Hanging Fruit doesn’t need to be expensive – Fast, cheap, or secure. Pick two. Or so the saying goes, but that’s especially true for SMB folks trying to protect their critical data. It ain’t cheap doing this security stuff, or is it? The reality is that given the maturity of SaaS options, most SMB folks should be looking at outsourcing critical systems (CRM, ERP, etc.). And for those systems still in-house, as well as networks and endpoints, you don’t need to make it complicated. Dark Reading presents some ideas, but we have also written quite a bit on fundamentals and low hanging fruit. No, world class security is not low hanging fruit, but compared to most other SMB (and even enterprise-size) companies, covering the fundamentals should be good enough. And no, I’m not saying to settle for crap security, but focusing on the fundamentals, especially the stuff that doesn’t cost much money (like secure configurations and update/patch) can make a huge difference in security posture without breaking the bank. – MR

—Mike Rothman

Tuesday, July 13, 2010

NSO Quant: Enumerate and Scope Sub-Processes

By Mike Rothman

As we get back to the Network Security Operations Quant series, our next step is to take each of the higher level process maps and break each step down into a series of subprocesses. Once these subprocesses are all posted and vetted (with community involvement – thanks in advance!), we’ll survey you all to see how many folks actually perform these specific steps in day-to-day operations.

We will first go into the subprocesses around Monitoring firewalls, IDS/IPS, and servers. The high level process map is below and you can refer to the original post for a higher-level description of each step.

The first two steps are enumerate, which means finding all the security, network, and server devices in your environment, and scope, which means determining the devices to be covered by the monitoring activity. It took a bit of discussion between us analyst types to determine which came first, the chicken or the egg – or in this case, the enumeration or the scoping step.

Ultimately we decided that enumeration really comes first because far too many organizations don’t know what they have. Yes, you heard that right. There are rogue or even authorized devices that slipped through the cracks, creating potential exposure. It doesn’t make sense to try figuring out the scope of the monitoring initiative without knowing what is actually there.

So we believe you must start with enumeration and then figure out what is in scope.


Enumerate

The enumeration step has to do with finding all the security, network, and server devices in your environment.

There are four major subprocesses in this step:

  1. Plan: We believe you plan the work and then work the plan, so you’ll see a lot of planning subprocesses throughout this research. In this step you figure out how you will enumerate, including what kinds of tools and techniques you will use. You also need to identify the business units to search, mapping their specific network domains (assuming you have business units) and developing a schedule for the activities.
  2. Setup: Next you need to set up your enumeration by acquiring and installing tools (if you go the automated route) or assembling your kit of scripts and open source methods. You also need to inform each group that you will be in their networks, looking at their stuff. In highly distributed environments it may be problematic to do ping sweeps and the like without giving everybody a ‘heads up’ first. You also need to get credentials (where required) and configure the tools you’ll be using.
  3. Enumerate: Ah, yes, you actually need to do the work after planning and setting up. You may be running active scans or analyzing passive collection efforts. You also need to validate what your tools tell you, since we all know how precise that technology stuff can be. Once you have the data, you'll spend some time filtering and compiling the results to get a feel for what's really out there. (A minimal sketch of what an active sweep might look like follows this list.)
  4. Document: Finally you need to prepare an artifact of your efforts, if only to use in the next step when you define your monitoring scope. Whether you generate PDFs or kill some trees is not relevant to this subprocess – it’s about making sure you’ve got a record of what exists (at this point in time), as well as having a mechanism to check for changes periodically.
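To give a feel for the Enumerate step, here is a minimal sketch of an active sweep using only the Python standard library. The subnet and probe ports are placeholders; a real effort would use purpose-built discovery tools (and passive collection), follow the notification and credential steps above, and validate the results.

```python
import socket
from ipaddress import ip_network

# Hypothetical subnet and ports -- substitute the ranges identified during planning.
SUBNET = "192.168.1.0/28"
PROBE_PORTS = (22, 80, 443)

def enumerate_hosts(subnet=SUBNET, ports=PROBE_PORTS, timeout=0.5):
    """Return hosts that answer a TCP connect on any probe port.
    A crude stand-in for real discovery tools and passive collectors."""
    found = []
    for host in ip_network(subnet).hosts():
        for port in ports:
            try:
                with socket.create_connection((str(host), port), timeout=timeout):
                    found.append({"ip": str(host), "open_port": port})
                    break            # one responsive port is enough to record the host
            except OSError:
                continue             # closed/filtered port or unreachable host
    return found

if __name__ == "__main__":
    for device in enumerate_hosts():
        print(device)
```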

Device Type Variances

Is there any difference between enumerating a firewall, an IDS/IPS, and a server – the devices we are focused on with this research project? Not really. You are going to use the same tools and techniques to identify what’s out there because fundamentally they are all IP devices.

There will be some variation in what you do to validate what you’ve found. You may want to log into a server (with credentials) to verify it is actually a server. You may want to blast packets at a firewall or IDS/IPS. Depending on the size of your environment, you may need to statistically verify a subset of the found devices. It’s not like you can log into 10,000 server devices. Actually you could, but it’s probably not a good idea.

You are looking for the greatest degree of precision you can manage, but that must be balanced with common sense to figure out how much validation you can afford.

Large vs. Small Company Considerations

One of the downsides of trying to build generic process maps is you try to factor in every potential scenario and reflect that in the process. But in the real world, many of the steps in any process are built to support scaling for large enterprise environments. So for each subprocess we comment on how things change depending on whether you are trying to monitor 10 or 10,000 devices.

Regarding enumeration, the differences crop up both when planning and during the actual enumeration process – specifically when verifying what you found. Planning for a large enterprise needs to be pretty detailed to cover a large IP address space (likely several different spaces), and there may be severe ramifications to disruptions caused by the scanning. Not that smaller companies don't care about disruption, but with fewer moving parts there is a smaller chance of unforeseen consequences.

Clearly the verification aspect of enumeration varies, depending on how deeply you verify. There is a lot of information you can gather here, so it’s a matter of balancing time to gather, against time to verify, against the need for the data.


Scope

Once we've finished enumeration it's time to figure out what we are actually going to monitor on an ongoing basis. This can be driven by compliance (all devices handling protected data must be monitored) or by a critical application. Of course this tends not to be a decision you can make arbitrarily by yourself, so a big part of the scoping process is ensuring you get buy-in on what you decide to monitor.

Here are the four steps in the Scope process:

  1. Identify Requirements: Monitoring is not free, though your management may think so. So we need a process to figure out why you monitor, and from that what to monitor. That means building a case for monitoring devices, potentially leveraging things like compliance mandates and/or best practices. You also need to meet with the business users, risk/compliance team, legal counsel, and other influencers to understand what needs to be monitored from their perspectives and why.
  2. Specify Devices: Based on those requirements, weigh each possible device type against the requirements and then figure out which devices of each type should be monitored. You may look to geographies, business units, or other means to segment your installed base into device groups. In a perfect world you’d monitor everything, but the world isn’t perfect. So it’s important to keep economic reality in mind when deciding how deeply to monitor what.
  3. Select Collection Method: For each device you’ll need to figure out how you will collect the data, and what data you want. This may involve research if you haven’t fully figured it out yet.
  4. Document: Finally you document the devices determined to be in scope, and then undertake the fun job of achieving consensus. Yes, you already asked folks what should be monitored when identifying requirements, but we remain fans of both asking ahead of time and then reminding them what you heard, confirming they still agree when it comes time to actually start doing something. The consensus building can add time – which is why most folks skip it – but it minimizes the chance that you'll be surprised down the road. Remember, security folks hate surprises. (A sketch of what a scoping record might capture follows this list.)
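For illustration, here is a sketch of the kind of record the scoping document might hold for each in-scope device, plus a quick consensus check. The device names, requirements, collection methods, and approval groups are hypothetical.

```python
# Hypothetical records for the scoping document: what is monitored, why,
# how the data will be collected, and who has signed off.
IN_SCOPE_DEVICES = [
    {
        "device": "fw-dmz-01",
        "type": "firewall",
        "requirement": "PCI - protects the cardholder data segment",
        "collection_method": "syslog over TLS",
        "approved_by": ["security", "network_ops"],
    },
    {
        "device": "erp-db-02",
        "type": "server",
        "requirement": "critical application (ERP)",
        "collection_method": "OS event logs via agent",
        "approved_by": ["security"],
    },
]

def missing_signoff(devices, required=("security", "business_owner")):
    """List devices that still lack approval from every required group."""
    return [d["device"] for d in devices
            if not set(required).issubset(d["approved_by"])]

print(missing_signoff(IN_SCOPE_DEVICES))   # both examples lack 'business_owner' sign-off
```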

Device Type Variance

The process is the same whether you are scoping a firewall, IDS/IPS, or server. Obviously the techniques used to collect data vary by device type, so you’ll need to research each type separately, but the general process is the same.

Large vs. Small Company Considerations

These are generally the same as in the Enumerate process. The bigger the company, the more moving pieces, the harder requirements gathering is, and the more difficult it is to reach consensus. Got it? OK, that was a bit tongue in cheek, but as a security professional trying to get things done, you'll need to figure out the level of research and consensus to attempt with each of these steps. Some folks would rather ask for forgiveness, but as you can imagine there are risks with that.

The good news is there is a lot of leverage in figuring out how to collect data from the various device types. Doing the research on collecting data from Windows or Linux servers is the same whether you have 15 or 1,500. The same for firewalls and IDS/IPS devices. But you’ll spend the extra time gaining consensus, right?

Tomorrow we’ll talk about defining the policies, which is more detailed due to the number of policies you need to define.

—Mike Rothman

Tokenization Architecture: The Basics

By Rich

Fundamentally, tokenization is fairly simple. You are merely substituting a marker of limited value for something of greater value. The token isn’t completely valueless – it is important within its application environment – but that value is limited to the environment, or even a subset of that environment.

Think of a subway token or a gift card. You use cash to purchase the token or card, which then has value in the subway system or a retail outlet. That token has a one to one relationship with the cash used to purchase it (usually), but it’s only usable on that subway or in that retail outlet. It still has value, we’ve just restricted where it has value.

Tokenization in applications and databases does the same thing. We take a generally useful piece of data, like a credit card or Social Security Number, and convert it to a local token that’s useless outside the application environment designed to accept it. Someone might be able to use the token within your environment if they completely exploit your application, but they can’t then use that token anywhere else. In practical terms, this not only significantly reduces risks, but also (potentially) the scope of any compliance requirements around the sensitive data.

Here's how it works in the most basic architecture (a minimal code sketch follows the list):

  1. Your application collects or generates a piece of sensitive data.
  2. The data is immediately sent to the tokenization server – it is not stored locally.
  3. The tokenization server generates the random (or semi-random) token. The sensitive value and the token are stored in a highly-secured and restricted database (usually encrypted).
  4. The tokenization server returns the token to your application.
  5. The application stores the token, rather than the original value. The token is used for most transactions with the application.
  6. When the sensitive value is needed, an authorized application or user can request it. The value is never stored in any local databases, and in most cases access is highly restricted. This dramatically limits potential exposure.
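To make the six steps concrete, here is a deliberately simplified sketch of the core tokenization operations. It is illustrative only – the class and method names are ours, not any product's API – and a real implementation adds authentication, vault encryption, auditing, and network transport.

```python
import secrets

class TokenizationServer:
    """Toy token vault: real products encrypt this store and tightly restrict access."""

    def __init__(self):
        self._vault = {}        # token -> sensitive value (step 3)

    def tokenize(self, sensitive_value):
        """Steps 2-4: receive the value, mint a random token, store the pair, return the token."""
        token = "tok_" + secrets.token_hex(16)   # random, so it cannot be reversed without the vault
        self._vault[token] = sensitive_value
        return token

    def detokenize(self, token, caller_is_authorized):
        """Step 6: only an authorized application/user may recover the original value."""
        if not caller_is_authorized:
            raise PermissionError("caller not authorized to retrieve sensitive data")
        return self._vault[token]

# Steps 1 and 5: the application collects the card number, swaps it for a token,
# and stores only the token locally.
server = TokenizationServer()
token = server.tokenize("4111111111111111")
customer_record = {"name": "example customer", "card": token}
```

Note that the token is random, so there is no algorithmic way back to the original value without the vault – which is exactly the difference from encryption called out below.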

For this to work, you need to ensure a few things:

  1. That there is no way to reproduce the original data without the tokenization server. This is different than encryption, where you can use a key and the encryption algorithm to recover the value from anywhere.
  2. All communications are encrypted.
  3. The application never stores the sensitive value, only the token.
  4. Ideally your application never even touches the original value – as we will discuss later, there are architectures and deployment options to split responsibilities; for example, having a non-user-accessible transaction system with access to the sensitive data separate from the customer facing side. You can have one system collect the data and send it to the tokenization server, another handle day to day customer interactions, and a third for handling transactions where the real value is needed.
  5. The tokenization server and database are highly secure. Modern implementations are far more complex and effective than a locked down database with both values stored in a table.

In our next posts we will expand on this model to show the architectural options, and dig into the technology itself. We’ll show you how tokens are generated, applications connected, and data stored securely; and how to make this work in complex distributed application environments.

But in the end it all comes down to the basics – taking something of wide value and replacing it with a token of restricted value.

This post is part of our series on Understanding and Selecting a Tokenization Solution.


Preliminary Results from the Data Security Survey

By Rich

We’ve seen an absolutely tremendous response to the data security survey we launched last month. As I write this we are up to 1,154 responses, with over 70% of respondents completing the entire survey. Aside from the people who took the survey, we also received some great help building the survey in the first place (especially from the Security Metrics community). I’m really loving this entire open research thing.

We’re going to close the survey soon, and the analysis will probably take me a couple weeks (especially since my statistics skills are pretty rudimentary). But since we have so much good data, rather than waiting until I can complete the full analysis I thought it would be nice to get some preliminary results out there.

First, the caveats. Here’s what I mean by preliminary:

These are raw results right out of SurveyMonkey. I have not performed any deeper analysis on them, such as validating responses, statistical analysis, normalization, etc. Later analysis will certainly change the results, and don’t take these as anything more than an early peek.

Got it? I know this data is dirty, but it’s still interesting enough that I feel comfortable putting it out there.

And now to some of the results:


We had a pretty even spread of organization sizes:

Organization size (percentage of respondents, count in parentheses):

                             Less than 100   101-1000      1001-10000    10001-50000   More than 50000   Responses
Number of employees/users    20.3% (232)     23.0% (263)   26.4% (302)   17.2% (197)   13.2% (151)       1,145
Number of managed desktops   25.8% (287)     26.9% (299)   16.4% (183)   10.2% (114)   –                 1,113
  • 36% of respondents have 1-5 IT staff dedicated to data security, while 30% don’t have anyone assigned to the job (this is about what I expected, based on my client interactions).
  • The top verticals represented were retail and commercial financial services, government, and technology.
  • 54% of respondents identified themselves as being security management or professionals, with 44% identifying themselves as general IT management or practitioners.
  • 53% of respondents need to comply with PCI, 48% with HIPAA/HITECH, and 38% with breach notification laws (seems low to me).

Overall it is a pretty broad spread of responses, and I’m looking forward to digging in and slicing some of these answers by vertical and organization size.


Before digging in, first a major design flaw in the survey. I didn’t allow people to select “none” as an option for the number of incidents. Thus “none” and “don’t know” are combined together, based on the comments people left on the questions. Considering how many people reviewed this before we opened it, this shows how easy it is to miss something obvious.

  • On average, across major and minor breaches and accidental disclosures, only 20-30% of respondents were aware of breaches.
  • External breaches were only slightly higher than internal breaches, with accidental disclosures at the top of the list. The numbers are so close that they will likely be within the margin of error after I clean them. This is true for major and minor breaches.
  • Accidental disclosures were more likely to be reported for regulated data and PII than IP loss.
  • 54% of respondents reported they had “About the same” number of breaches year over year, but 14% reported “A few less” and 18% “Many less”! I can’t wait to cross-tabulate that with specific security controls.

Security Control Effectiveness

This is the meat of the survey. We asked about effectiveness for reducing number of breaches, severity of breaches, and costs of compliance.

  • The most commonly deployed tools (of the ones we surveyed) are email filtering, access management, network segregation, and server/endpoint hardening.
  • Of the data-security-specific technologies, web application firewalls, database activity monitoring, full drive encryption, backup tape encryption, and database encryption are most commonly deployed.
  • The most common write-in security control was user awareness.
  • The top 5 security controls for reducing the number of data breaches were DLP, Enterprise DRM, email filtering, a content discovery process, and entitlement management. I combined the three DLP options (network, endpoint, and storage) since all made the cut, although storage was at the bottom of the list by a large margin. EDRM rated highly, but was the least used technology.
  • For reducing compliance costs, the top 5 rated security controls were Enterprise DRM, DLP, entitlement management, data masking, and a content discovery process.

What’s really interesting is that when we asked people to stack rank their top 3 most effective overall data security controls, the results don’t match our per-control questions. The list then becomes:

  1. Access Management
  2. Server/endpoint hardening
  3. Email filtering

My initial analysis is that the first questions focused on a set of data-security-specific controls that aren't necessarily widely used, and compared only among them. In the top-3 question, participants could select any control on the list, and the mere act of limiting themselves to the ones they had deployed skewed the results. Can't wait to do the filtering on this one.

We also asked people to rank their single least effective data security control. The top (well, bottom) 3 were:

  1. Email filtering
  2. USB/portable media encryption or device control
  3. Content discovery process

Again, these correlate with what is most commonly being used, so no surprise. That’s why these are preliminary results – there is a lot of filtering/correlation I need to do.

Security Control Deployment

Aside from the most commonly deployed controls we mentioned above, we also asked why people deployed different tools/processes. Answers ranged from compliance, to breach response, to improving security, and reducing costs.

  • No control was primarily deployed to reduce costs. The closest was email filtering, at 8.5% of responses.
  • The top 5 controls most often reported as being implemented due to a direct compliance requirement were server/endpoint hardening, access management, full drive encryption, network segregation, and backup tape encryption.
  • The top 5 controls most often reported as implemented due to an audit deficiency are access management, database activity monitoring, data masking, full drive encryption, and server/endpoint hardening.
  • The top 5 controls implemented for cost savings were reported as email filtering, server/endpoint hardening, access management, DLP, and network segregation.
  • The top 5 controls implemented primarily to respond to a breach or incident were email filtering, full drive encryption, USB/portable media encryption or device control, endpoint DLP, and server/endpoint hardening.
  • The top 5 controls being considered for deployment in the next 12 months are USB/portable media encryption or device control (by a wide margin), DLP, full drive encryption, WAF, and database encryption.

Again, all this is very preliminary, but I think it hints at some very interesting conclusions once I do the full analysis.


Friday, July 09, 2010

Top 3 Steps to Simplify DLP without Compromise

By Rich

Just when I thought I was done talking about DLP, interest starts to increase again. Below is an article I wrote up on how to minimize the complexity of a DLP deployment. This was for the Websense customer newsletter/site, but is my usual independent perspective.

One of the most common obstacles to a DLP deployment is psychological, not technical. With massive amounts of content and data streaming throughout the enterprise in support of countless business processes, the idea that we can somehow wrangle this information in any meaningful way, with minimal disruptions to business process, is daunting if not nigh on inconceivable. This idea is especially reinforced among security professionals still smarting from the pain of deploying and managing the constant false positives and tuning requirements of intrusion detection systems.

Since I started to cover DLP technologies about 7 years or so ago, I’ve talked with hundreds of people who have evaluated and/or deployed data loss prevention. Over the course of those conversations I’ve learned what tends to work, what doesn’t, and how to reduce the potential complexity of DLP deployments. Once you break the process down it turns out that DLP isn’t nearly as difficult to manage as some other security technologies, and even very large organizations are able to rapidly reap the benefits of DLP without creating time-consuming management nightmares.

The trick, as you’ll see, is to treat your DLP deployment as an incremental process. It’s like eating an elephant – you merely have to take it one bite at a time. Here are my top 3 tips, drawn from those hundreds of conversations:

1. Narrow your scope:

One of the most common problems with an initial DLP deployment is trying to start on too wide a scale. Your scope of deployment is defined by two primary factors – how many DLP components you deploy, and how many systems/employees you monitor. A full-suite DLP solution is capable of monitoring network traffic, integrating with email, scanning stored data, and monitoring endpoints. When looking at your initial scope, only pick one of the components to start with.

I usually recommend starting with anything other than endpoints, since you then have fewer components to manage. Most organizations tend to start on the network (usually with email) since it’s easy to deploy in a passive mode, but I do see some companies now starting with scanning stored data due to regulatory requirements.

In either case, stick with one component as you develop your initial policies and then narrow the scope to a subset of your network or storage. If you are in a mid-sized organization you might not need to narrow too much, but in large organizations you should pick a subnet or single egress point rather than thinking you have to watch everything.

Why narrow the scope? Because in our next step we’re going to deploy our policies, and starting with a single component and a limited subset of all your traffic/systems provides the information you need to tune policies without being overwhelmed with incidents you feel compelled to manage.

2. Start with one policy:

Once you’ve defined your initial scope, it’s time to deploy a policy. And yes, I mean a policy, not many policies. The policy should be narrow and align with your data protection priorities; e.g. credit card number detection or a subset of sensitive engineering plans for partial document matching.

You aren’t trying to define a perfect policy out of the box; that’s why we are keeping our scope narrow. Once you have the policy ready, go ahead and launch it in monitoring mode. Over the course of the next few days you should get a good sense of how well the policy works and how you need to tune it. Many of you are likely looking for similar kinds of information, like credit card numbers, in which case the out of the box policies included in your DLP product may be sufficient with little to no tuning.
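For example, a first policy like credit card detection typically boils down to a pattern match plus a validity check. Here's a minimal sketch (regex plus Luhn checksum); real DLP policies add context, proximity rules, and tuning on top of this, so treat it as illustration rather than a production detector.

```python
import re

CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")   # rough PAN pattern

def luhn_valid(number):
    """Luhn checksum -- filters out most random digit strings."""
    digits = [int(d) for d in number if d.isdigit()]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def find_card_numbers(text):
    """Return candidate card numbers that also pass the Luhn check."""
    hits = []
    for match in CARD_PATTERN.finditer(text):
        raw = re.sub(r"[ -]", "", match.group())
        if 13 <= len(raw) <= 16 and luhn_valid(raw):
            hits.append(raw)
    return hits

print(find_card_numbers("order notes: card 4111 1111 1111 1111, ref 12345"))
```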

3. Take the next bite:

Once you are comfortable with the results you are seeing it’s time to expand your deployment scope. Most successful organizations start by expanding the scope of coverage (systems scanned or network traffic), and then add DLP components to the policy (storage, endpoint, other network channels).

Then it’s time to start the process over with the next policy.

This iterative approach doesn’t necessarily take very long, especially if you leverage out of the box policies. Unlike something like IDS you gain immediate benefits even without having to cover all traffic throughout your entire organization. You get to tune your policies without being overwhelmed, while managing real incidents or exposure risks.


Taking the High Road

By Mike Rothman

This is off topic but I need to vent a bit. I’ve followed the LeBron James free-agency saga with amusement. Thankfully I was in the air last night during the “Decision” TV special, so I didn’t have any temptation to participate in the narcissistic end of a self-centered two weeks. LeBron and his advisors did a masterful job of playing the media, making them believe anything was possible, and then doing the smartest thing and heading to Miami to join the Heat.

First off, I applaud all three All-Stars, who made economic sacrifices to give themselves a chance to win. They all understand that a ball-player’s legacy is not about how much money they made, but how many championships they won. Of course, the economic sacrifices are different for them – you know, whether to settle for $2 or $3 million less each year. Over 6 years that is big money, but they want to win and win now.

But that’s not what I want to talk about. I want to talk about how the Cavaliers’ owner, Dan Gilbert, responded to the situation. It makes you think about the high road versus the low road. Clearly Gilbert took the low road, basically acting like a spoiled child whose parents said they couldn’t upgrade to the iPhone 4. He had a tantrum – calling LeBron names and accusing him of giving up during the playoffs. The folks at the Bleacher report hit it right on the head.

I understand this guy’s feelings are hurt. LeBron (and his advisors) played him like a fiddle. They gave him hope that LeBron would stay, even though at the surface it would be a terrible decision – if the goal is to win championships. Over the past 8 years, LeBron doubled the net worth of the Cavs franchise, and that is the thanks he gets from the owner.

Can you see Bob Kraft of the Patriots having a similar tantrum? Or any of the top owners in the sport? Yes, Dan Gilbert really reflected the mood of his town. His frustration at losing the LeBron-stakes aligns with the prospect of losing a lot more in years to come. But as an owner, as the face of your franchise, you have to take the high road. You get a Cleveland sports columnist to write the hit piece making all those speculations.

But you (and the rest of the franchise) need to act with class. Have the PR folks write a short statement thanking your departing star for 8 great years of sell-outs, wishing him the best of luck, and saying you look forward to seeing him in the Eastern Conference finals.

Most of all, you take a step back and you don’t say anything. That’s what I try to tell the kids when they are upset. And try to practice myself (failing most of the time, by the way). Don’t say anything because you’ll only make it worse and say something you’ll regret. I’m sure folks in Cleveland are happy with Dan Gilbert’s outburst, but the rest of the country sees a total ass having a tantrum in public. And overnight he made LeBron into a sympathetic figure. Which is probably what LeBron and his advisors wanted the entire time. Damn, that’s one smart power forward, probably enjoying the view from the high road.

—Mike Rothman

Friday Summary: July 9, 2010

By Adrian Lane

Today is the deadline for RSA speaker submissions, so the entire team was scrambling to get our presentation topics submitted before the late rush. One of the things that struck me about the submission suggestions is that general topics are discouraged. RSA notes in the submission guidelines that 60% of the attendees have 10 or more years of security experience. I think the idea is that, if your audience is more advanced, introductory or general-audience presentations won't hold their attention, so intermediate and advanced sessions are encouraged. And I bet they are right about that, given the success of other venues like BlackHat, Defcon, and Security B-Sides. Still, I wonder if that is the right course of action. Has security become a private club? Are we so caught up in the security ‘echo chamber’ that we forget about the mid-market folks without the luxury of full-time security experts on staff? Perhaps security just is not very interesting without some novel new hack. Regardless, it seems like it’s the same group of us, year after year, talking about the same set of issues and problems.

From my perspective software developers are the weakest link in the security chain. Most coders don’t have 10 years of security experience. Heck, they don’t have two! Only a handful of people I know have been involved in secure code development practices for 10 years or more. But getting developers up to speed on security is one of the biggest wins, and advanced security topics may not be accessible to them. The balancing act is between cutting-edge security discussions that keep researchers up to date, and educating the people who can benefit most.

I was thinking about this during our offsite this week while Rich and Mike talked about having their kids trained in martial arts when they are old enough. They were talking about how they want the kids to be able to protect themselves when necessary. They were discussing likely scenarios and what art forms they felt would be most useful for, well, not getting their asses kicked. And they also want the kids to derive many of the same secondary benefits of respect, commitment, confidence, and attention to detail many of us unwittingly gained when our parents sent us to martial arts classes. As the two were talking about their kids’ basic introduction to personal security, it dawned on me that this is really the same issue for developers. Not to be condescending and equate coders to children, but what was bugging me was the focus on the leaders in the security space at the expense of opening up the profession to a wider audience. Basic education on security skills doesn’t just help build up a specific area of education every developer needs – the entire approach to secure code development makes for better programmers. It reinforces the basic development processes we are taught to go through in a meaningful way.

I am not entirely sure what the ‘right’ answer is, but RSA is the biggest security conference, and application developers seem to be a very large potential audience that would greatly benefit from basic exposure to general issues and techniques. Secure code development practices and tools are, and hopefully will remain for the foreseeable future, a major category for growth in security awareness and training. Knowledge of these tools, processes, and techniques makes for better code.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Favorite Securosis Posts

Other Securosis Posts

  • Uh, not so much.

Favorite Outside Posts

Project Quant Posts

Research Reports and Presentations

Top News and Posts

Blog Comment of the Week

This week’s winner is … no one. We had a strong case of ‘blog post fail’ so I guess we cannot expect comments.

—Adrian Lane

Thursday, July 08, 2010

School’s out for Summer

By Mike Rothman

I saw an interesting post on InformationWeek about protecting your network and systems from the influx of summer workers. The same logic goes for the December holidays – when additional help is needed to stock shelves, pack boxes, and sell things. These temporary folks can do damage – more because they have no idea what they can/should do than because of any malicious intent.

I’m not a big fan of some of the recommendations in the post. Like not providing Internet access. Common sense needs to rule the day, right? Someone in a warehouse doesn’t need corporate Internet access. But someone working in the call center might. It depends on job function.

But in reality, you don’t need to treat the temporary workers any different than full-time folks. You just need to actually do the stuff that you should be doing anyway. Here are a couple examples:

  • Training: Yes, it seems a bit silly to spend a few hours training temporary folks when they will leave in a month or two. On the other hand, it seems silly to have these folks do stupid things and then burn up your summer cleaning up after them.
  • Lock down machines: You have more flexibility to lock down devices for temporary workers, so do that. Whether it's a full lockdown (using application whitelisting) or a lighter application set (using the application control features in the endpoint suite), either approach reduces the likelihood of your users doing something stupid, and the damage if they do. A minimal sketch of the allowlist idea appears after this list.
  • Segment the network: If possible (and it should be), it may make sense to put these users on a separate network, again depending on their job functions. If they need Internet access, maybe give them a VPN pipe directly to the outside and restrict access to internal networks and devices.
  • Monitor Everything: Yes, you need to stay on your toes. Make sure you are looking for anomalous behavior and focused on reacting faster. We say that a lot, eh?
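
To make the lockdown bullet above a bit more concrete, here is a minimal, hypothetical Python sketch of the allowlist concept behind application whitelisting: compare what is actually running against a short list of approved executables and flag everything else. The process names and the approved list are invented for illustration – real whitelisting products enforce this at the operating system level rather than just reporting on it.

    # Toy illustration of the allowlist idea behind application whitelisting.
    # Real products enforce the list in the OS; this sketch only reports violations.
    APPROVED = {
        "outlook.exe",      # hypothetical apps allowed on a temp-worker image
        "wms_client.exe",
        "chrome.exe",
    }

    def find_violations(running_processes):
        """Return process names that are not on the approved list."""
        return sorted(set(p.lower() for p in running_processes) - APPROVED)

    if __name__ == "__main__":
        observed = ["chrome.exe", "Outlook.exe", "utorrent.exe"]   # sample inventory
        for name in find_violations(observed):
            print("Not on the approved list:", name)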

So again, workers come and go, but your security strategy should cover different scenarios. You can make some minor changes to factor in temporary work, but these folks cannot get a free pass and you need constant vigilance. Same old, same old.

—Mike Rothman

Wednesday, July 07, 2010

Incite 7/7/2010: The Mailbox Vigil

By Mike Rothman

The postman (or postwoman) doesn’t really get any love. Not any more. In the good old days, we’d always look forward to what goodies the little white box truck, with the steering wheel on the wrong side, would bring. Maybe it was a birthday card (with a check from Grandma). Or possibly a cool catalog. Or maybe even a letter from a friend.

Nowadays the only thing that comes in the mail for me is bills. Business checks go to Phoenix. The magazines to which I still stupidly subscribe aren't very exciting. I've probably read the interesting articles on the Internet already. The mail is yet another casualty of that killjoy Internet thing.

But not during the summer. You see, we’ve sent XX1 (that’s our oldest daughter) off to sleepaway camp for a month. It’s her first year and she went to a camp in Pennsylvania, not knowing a soul. I’m amazed by her bravery in going away from home for the first time to a place she’s never been (besides a two-hour visit last summer), without any friends. It’s not bravery like walking into a bunker filled with armed adversaries or a burning house to save the cat, but her fearlessness amazes us. I couldn’t have done that at 9, so we are very proud.

But it’s also cold turkey from a communication standpoint. No phone calls, no emails, no texts. We know she’s alive because they post pictures. We know she is happy because she has a grin from ear to ear in every picture. So that is comforting, but after 9 years of incessant chatter, it’s a bit unsettling to not hear anything. The sound of silence is deafening. At least until the twins get up, that is.

We can send her letters and the camp has this online system, where we log into a portal and type in a message, and they print it out and give it to her each day. Old school, but convenient for us. That system is only one way. The only way we receive communication from her is through the mail. Which brings us back to our friend the postman. Now the Boss rushes to the mailbox every day to see whether XX1 has sent us a letter.

Most days we get nada. But we have gotten two postcards thus far (she’s been gone about 10 days), each in some kind of hieroglyphics not giving us much information at all. And we even gave her those letter templates that ask specific questions, like “What did you do today?” and “What is your favorite part of camp?” As frustrating as it is to get sparse communication, I know (from my camp experience) that it’s a good sign. The kids that write Tolstoian letters home are usually homesick or having a terrible time.

So I can be pragmatic about it and know that in another 3 weeks the chatter will start again and I’ll get to hear all the camp stories… 100 times. But the Boss will continue her mailbox vigil each day, hoping to get a glimpse of what our daughter is doing, how she’s feeling, and the great time she’s having. And I don’t say a word because that’s what Moms do.

– Mike.

Photo credits: “happy to receive mail” originally uploaded by Loving Earth

Recent Securosis Posts

  1. Know Your Adversary
  2. IBM gets a BigFix for Tivoli Endpoint Management
  3. Tokenization: The Business Justification
  4. Understanding and Selecting SIEM/LM: Advanced Features
  5. Understanding and Selecting SIEM/LM: Integration
  6. Understanding and Selecting SIEM/LM: Selection Process
  7. Friday Summary: July 1, 2010

Incite 4 U

  1. The ethics of malware creation – The folks at NSS Labs kind of started a crap storm when they dropped out of the AMTSO (anti-malware testing standards organization) and started publishing their results, which were not flattering to some members of the AMTSO. Then the debate migrated over to the ethics of creating malware for testing purposes. Ed at SecurityCurve does a good job of summarizing a lot of the discussion. To be clear, it’s a slippery slope and I can definitely see both sides of the discussion, especially within the context of the similar ethical quandary around developing new diseases. I come down on the side of doing whatever I can to really test my defenses, and that may mean coming up with interesting attacks. Obviously you don’t publish them in the wild and the payload needs to be inert, but to think that the bad guys aren’t going to figure it out eventually is naive. Unfortunately we can’t depend on everyone to act responsibly when they find something, so we have to assume that however the malware was originated, it will become public and weaponized. And that means we get back to basics. Right, react faster/better and contain the damage. – MR

  2. Mission to (Replace) MARS – When Cisco announced last year they weren't supporting third party network and security devices on their MARS analysis platform, it was a clear indication that the product wasn't long for the world. Of course, that started a feeding frenzy in the SIEM/Log Management world with all 25 vendors vying to get into Cisco's good graces and become a preferred migration path, whatever that means. Finally Cisco has announced who won the beauty contest by certifying 5 vendors who did some kind of interoperability testing, including ArcSight, RSA, LogLogic, NetForensics, and Splunk. Is this anything substantial? Probably not. But it does give sales reps something to talk about. And in a pretty undifferentiated market fighting for displacements, that isn't a bad thing. – MR

  3. More goodies for your pen testing bag – Yes, we are big fans of hacking yourself, and that usually requires tools – open source, commercial, or hybrid, it doesn't matter. Sophisticated folks leverage memory analysis, reverse engineering apps and/or application scanners. The good news is there is no lack of new tools showing up to make the job of the pen tester easier. First hat tip goes to Darknet, who points out the inundator tool, which basically floods an IPS and makes it hard to detect the real attack. The folks at Help-Net also cover the XSSer tool, which is designed to find cross-site scripting vulnerabilities on web apps. Like any pen testing tool, these are useful for both good and evil, and you can be sure there are folks on the wrong side of the fence using them. That means at worst you should check them out and see what they find. Better that than be surprised when the bad guys find stuff. – MR

  4. How real is cyberwar? – Rich has gone on record calling hogwash on this whole cyberwar phenomenon, pointing out that unless you have blood running through the streets and lots of body bags, it's not war. Then I read one of Bejtlich's weekend missives (Cyberwar is real) and it got me thinking. Of course, any time you bust out Sun Tzu, I need to take a step back and consider. I think the point is cyber attacks will clearly be a part of most wars moving forward. Not from the standpoint of directly hurting folks, but by crippling critical systems. Now we bomb airports and power stations and media installations in the initial phases of an attack to cripple the enemy. In the future, it would be a lot cheaper (though less reliable and shorter-lived) to pwn the air traffic control, shut down the power plants, and take over radio and TV broadcasts to start a propaganda barrage. So yes, cyberwar is real, but it gets back to how we define cyberwar. – MR

—Mike Rothman

Friday, July 02, 2010

Understanding and Selecting SIEM/LM: Selection Process

By Mike Rothman

Now that you thoroughly understand the use cases and the technology underpinnings of SIEM and Log Management platforms, it's time to flex your knowledge and actually buy one. As with most of our research at Securosis, we favor mapping out a very detailed process, and leaving you to decide which steps make sense in your situation. So we don't expect every organization to go through every step in this process. Figure out what will work for your organization and do that.

Define Needs

Before you start looking at any tools you need to understand why you might need a SIEM/LM; how you plan on using it; and the business processes around management, policy creation, and incident handling. You can (and should) consult our descriptions of the use cases (Part 1 & Part 2) to really understand what problem you are trying to solve and why. If you don’t do this, your project is doomed to fail. And that’s all we’ll say about that.

  • Create a selection committee: Yeah, we hate the term 'committee' as well, but the reality is a decision to acquire SIEM – along with the business issues it is expected to address – comes from multiple groups. SIEM/LM touches not only the security team, but also the risk management, audit, compliance, and operations teams. So it's best to get someone from each of these teams (to the degree they exist in your organization) on the committee. Basically you want to ensure that anyone who could say no, or subvert the selection at the 11th hour, is on board from the beginning. Yes, that involves playing the game, but if you want to get the process over the finish line, you'll do what you need to.
  • Define the systems and platforms to monitor: Are you looking to monitor just security devices or also general-purpose network equipment, databases, applications, VMs and/or anything else? In this stage, detail the monitoring scope and the technical specifics of the platforms involved. You'll use this list to determine technical requirements and prioritize features and platform support later in the selection process. Remember that your needs will grow over time and you may be limited by budget during the initial procurement, so break the list into a group of high-priority sources you need immediately, and additional groups of data sources you may want to monitor later (a minimal sketch of such an inventory appears after this list).
  • Determine security and/or compliance requirements: The committee really helps with collecting requirements, as well as mapping out reports and alerts. The implementation will involve some level of correlation, analysis, reporting, and integration – which needs to be defined ahead of time. Obviously that can and will change over time, but give this some thought because these requirements will drive your selection. You don't need to buy a Rolls-Royce if a Nissan Sentra would satisfy your requirements. In this step map your security and compliance needs to the platforms and systems from the previous step, which helps determine everything from technical requirements to process workflow.
  • Outline process workflow, forensics, and reporting requirements: SIEM/LM workflow is highly dependent on use case. When used in a security context, the security team monitors and manages events, and will have an escalation process to verify attacks and remediate. When used to improve efficiency, the key is to leverage as many rules and alerts as possible, which is really a security team function. A forensics use case will involve the investigative/incident team. In most cases, audit, legal, and/or compliance will have at least some sort of reporting role, since compliance is typically the funding source for the project. Since different SIEM/LM platforms have different strengths and weaknesses in terms of management interfaces, reporting, forensics, and internal workflow, knowing your process before defining technical requirements can prevent headaches down the road.
  • Product versus managed service: Are you open to using a managed service for SIEM/LM? Do you have the internal resources/expertise to manage (and tune) the platform? Now is the time to decide whether a service is an option, since that impacts the rest of the selection process.
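
As a purely illustrative example of the data source prioritization described above, here is a minimal Python sketch of a monitoring inventory broken into rollout phases. The device names, types, and collection methods are hypothetical placeholders – the point is simply to record what you plan to monitor, how you will collect from it, and when.

    # Hypothetical inventory of monitoring data sources, grouped by rollout phase.
    INVENTORY = [
        {"source": "perimeter-firewalls", "type": "firewall", "collection": "syslog", "phase": 1},
        {"source": "dmz-ips",             "type": "IDS/IPS",  "collection": "syslog", "phase": 1},
        {"source": "windows-dcs",         "type": "server",   "collection": "agent",  "phase": 1},
        {"source": "oracle-financials",   "type": "database", "collection": "agent",  "phase": 2},
        {"source": "core-routers",        "type": "network",  "collection": "flow",   "phase": 3},
    ]

    def sources_for_phase(phase):
        """Return the data sources scheduled for a given rollout phase."""
        return [s["source"] for s in INVENTORY if s["phase"] == phase]

    if __name__ == "__main__":
        print("Initial procurement must cover:", sources_for_phase(1))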

By the end of this phase you should have defined key stakeholders, convened a selection team, prioritized the systems to protect, determined protection requirements, and roughed out workflow needs.

Formalize Requirements

This phase can be performed by a smaller team working under the mandate of the selection committee. Here the generic needs determined in phase 1 are translated into specific technical features, and any additional requirements are considered. This is the time to come up with criteria for collection and aggregation, additional infrastructure integration, data storage/archival, deployment architecture, management and identity integration, and so on. You may need to dig into what information your devices provide to ensure you can collect the necessary data to reliably feed the SIEM platform. You can always refine these requirements as you proceed through the selection process and get a better feel for how the products work.

At the conclusion of this stage you develop a formal RFI (Request For Information) to release to vendors, and a rough RFP (Request For Proposals) that you’ll clean up and formally issue in the evaluation phase.

Evaluate Products

All the SIEM/LM vendors tell similar stories, which makes it difficult to cut through the marketing and figure out whether a product really meets your needs. The following steps should minimize your risk and help you feel confident in your final decision:

  • Issue the RFI: Larger organizations should issue an RFI through established channels and contact a few leading SIEM/LM vendors directly. If you're a smaller organization, start by sending your RFI to a trusted VAR and email a few SIEM/LM vendors that seem appropriate for your organization.
  • Define the short list: Before bringing anyone in, match any materials from the vendor or other sources to your RFI and draft RFP. Your goal is to build a short list of 3 products which can satisfy most of your needs. You should also use outside research sources and product comparisons. Understand that you’ll likely need to compromise at some point in the process, as it’s unlikely any one vendor can meet every requirement.
  • Dog and Pony Show: Instead of generic presentations and demonstrations, ask the vendors to walk you through specific use cases that match your expected needs. This is critical, because the vendors are very good at showing cool eye candy and presenting the depth of their capabilities, while redefining your requirements based on their strengths. Don’t expect a full response to your draft RFP; these meetings are to help you better understand how each vendor can solve your specific use cases and to finalize your requirements.
  • Finalize and issue your RFP: At this point you should completely understand your specific requirements, and issue a final formal RFP.
  • Assess RFP responses and start proof of concept (PoC): Review the RFP results and drop anyone who doesn’t meet your hard requirements, such as platform support. Then bring in any remaining products for in-house testing. You’ll want to replicate your projected volume and data sources if at all possible. Build a few basic policies that match your use cases, then violate them, so you can get a feel for policy creation and workflow. And make sure to do some forensics work and reporting so you can understand the customization features. Understand that you need to devote resources to each PoC and stick to the use cases. The objective here is to put the product through its paces and make sure it meets your needs.
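
To show what "build a few basic policies that match your use cases, then violate them" might look like during the PoC, here is a minimal Python sketch of a brute-force-style correlation rule: alert when one source generates a threshold number of failed logins within a time window, then feed it synthetic events that violate the policy to confirm the alert fires. The field names, threshold, and window are hypothetical – every SIEM expresses rules in its own language, so this only shows the shape of the test, not any product's syntax.

    from collections import defaultdict

    THRESHOLD = 5    # failed logins...
    WINDOW = 300     # ...from one source within 5 minutes triggers an alert

    def evaluate(events):
        """Return (source_ip, trigger_time) pairs for sources that violate the policy."""
        failures = defaultdict(list)   # source_ip -> timestamps of failed logins
        alerts = []
        for event in sorted(events, key=lambda e: e["ts"]):
            if event["action"] != "login_failure":
                continue
            recent = [t for t in failures[event["src_ip"]] if event["ts"] - t < WINDOW]
            recent.append(event["ts"])
            failures[event["src_ip"]] = recent
            if len(recent) >= THRESHOLD:
                alerts.append((event["src_ip"], event["ts"]))
                failures[event["src_ip"]] = []   # reset after alerting
        return alerts

    if __name__ == "__main__":
        # Synthetic events that deliberately violate the policy during the PoC.
        violating = [{"ts": i * 10, "src_ip": "10.1.1.99", "action": "login_failure"}
                     for i in range(6)]
        print(evaluate(violating))   # expect one alert for 10.1.1.99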

Selection and Deployment

  • Select, negotiate, and buy: Finish testing, take the results to the full selection committee, and begin negotiating with your top two choices, assuming more than one meets your needs. Yes, this takes more time, but you want to be able to walk away from one of the vendors if they won’t play ball with pricing, terms, and conditions.
  • Implementation planning: Congratulations, you've selected a product, navigated the procurement process, and made a sales rep happy. But now the next stage of work begins – with selection done, you need to plan the deployment. That means nailing down little details like lining up resources, getting access/credentials to devices, locking in an install schedule, and even the logistics of getting devices to the right locations. No matter how well you execute on the selection, unless you implement flawlessly and focus on quick wins and getting immediate value from the SIEM/LM platform, your project will be a failure.

I can hear the groans from small and medium-sized businesses that look at this process and think it's a ridiculous amount of detail. Once again we want to stress that we created a granular selection process, but you can pare it down to meet your organization's requirements. We wanted to make sure we captured all the gory details some organizations need to go through for a successful procurement. The process outlined is appropriate for a large enterprise, but a little pruning can make it manageable for smaller groups. That's the great thing about process: you can change it any way you see fit, at no expense.

With that, we end our series on Understanding and Selecting a SIEM/Log Management platform. Hopefully the content will be useful as you proceed through your own selection process. As always, we appreciate all your comments on our research. We’ll be packaging up the entire series as a white paper over the next few weeks, so stay tuned for that.

Other Posts in Understanding and Selecting SIEM/LM

  1. Introduction
  2. Use Cases, Part 1
  3. Use Cases, part 2
  4. Business Justification
  5. Data Collection
  6. Aggregation, Normalization, and Enrichment
  7. Correlation and Alerting
  8. Reporting and Forensics
  9. Deployment Models
  10. Data Management
  11. Advanced Features
  12. Integration

—Mike Rothman

Friday Summary: July 1, 2010

By Rich

Earlier this week I was at the gym. I’d just finished a pretty tough workout and dropped down to the cafe area to grab one of those adult candy bars that tastes like cardboard and claims to give you muscles, longer life, and sexual prowess while climbing mountains. At least, that’s what I think they claim based on the pictures on the box. (And as a former mountain rescue professional, the technical logistics of the last claim aren’t worth the effort and potential injuries to sensitive bits).

Anyway, there was this woman in front of me, and her ordering process went like this:

  1. Ask for item.
  2. Ask for about 5-6 different options on said menu item, essentially replacing all ingredients.
  3. Look surprised when a number following a dollar sign appears on the little screen facing her on the cash register.
  4. Reach down to gym bag.
  5. Remove purse.
  6. Reach into purse.
  7. Remove wallet.
  8. Begin scrounging through change.
  9. See salad in cooler out of corner of eye.
  10. Say, “Oh! I didn’t see that!”
  11. Walk to cooler, leaving all stuff in front of register, with transaction in the middle.
  12. Fail to see or care about line behind her.

At this point, as she was rummaging through the pre-made salads, the guy behind the register looked at me, I looked at him, and we both subconsciously communicated our resignation as to the idiocy of the display in front of us. He moved over and unlocked the next register so I could buy my mountain-prowess-recovery bar, at which point the woman returned to the register and looked surprised that he was helping other (more decisive and prepared) customers.

One of my biggest pet peeves is people who lack awareness of the world around them. Which is most people, and probably explains my limited social life. But they probably hate judgmental sanctimonious jerks like me, so it all works out.

Just think about how many fewer security (and other) problems we’d have in the world if people would just swivel their damn heads and consider other people before making a turn? John Lennon should write a song about that or something.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Favorite Securosis Posts

Other Securosis Posts

Favorite Outside Posts

Project Quant Posts

Research Reports and Presentations

Top News and Posts

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to Michael O’Keefe, in response to The Open Source Database Security Project.

Adrian – thanks for the reply. Maybe risk assessment wasn’t the right word – I was thinking of some sort of market analysis to determine which open source databases to focus on. I was using selection criteria like “total number of installations” and “total size in bytes”, etc, but user groups is indeed a good criterion to use, since you are targeting an audience of actual ordinary users, not mega companies like facebook and twitter that should be managing the security themselves.

Maybe these types of distributed databases (bigtable, Cassandra) should be the focus of separate project? A quick search of Securosis shows one mention of bigtable, so while I don’t want to expand the scope of the current project, these “storage systems” do offer some interesting security problems. For example here Peter Fleischer from Google discusses the difficulty in complying with the EU Data Protection Directive:



Thursday, July 01, 2010

Understanding and Selecting SIEM/LM: Integration

By Adrian Lane

They say that no man is an island, and in the security space that's very true. No system is, either – especially those tasked with some kind of security management. We get caught up in having SIEM and Log Management platforms suck in every piece of information they can to help with event correlation and analysis, but when it comes down to it, security management is just one aspect of an enterprise's management stack. SIEM/Log Management is only one discipline in the security management chain, and must feed some portion of its analysis to supporting systems. So clearly integration is key, both to getting value from SIEM/LM, and to making sure the rest of the organization is on board with buying and deploying the technology.

Integrating the SIEM/Log Management platform with a number of enterprise IT management systems is important – ranging from importing their data, to sending them alerts, up to participating in an IT organization's workflow. We've broken the integrations up into inbound (receiving data from another tool) and outbound (sending data/alerts to another tool).

Inbound integration

  1. Security management tools: We discussed this a bit when talking about data collection, regarding the importance of broadening the number of data sources for analysis and reporting. These systems include vulnerability management, configuration management, change detection, network behavioral analysis, file integrity monitoring, endpoint security consoles, etc. Typically integration with these systems is via custom connectors, and most SIEM/LM players have relationships with the big vendors in each space.
  2. Identity Management: Identity integration was discussed in the last post on advanced features and is another key system for providing data to the SIEM/LM platform. This can include user and group information (to streamline deployment and ongoing user management) from enterprise directory systems like Active Directory and LDAP, as well as provisioning and entitlement information to implement user activity monitoring. These integrations tend to be via custom connectors as well.
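
As a hypothetical illustration of why identity data matters for user activity monitoring, here is a minimal Python sketch that enriches a raw log event with user and group information from a directory export. The account names, groups, and event fields are invented; a real integration would use a connector to Active Directory/LDAP and the provisioning system rather than a static dictionary.

    # Hypothetical directory export: account -> identity attributes.
    DIRECTORY = {
        "jsmith":  {"full_name": "Jane Smith", "groups": ["Finance", "VPN-Users"]},
        "dbadmin": {"full_name": "DB Service Account", "groups": ["DBA", "Privileged"]},
    }

    def enrich(event):
        """Attach user and group context to a raw event for user activity monitoring."""
        identity = DIRECTORY.get(event.get("account"), {})
        event["user"] = identity.get("full_name", "unknown")
        event["groups"] = identity.get("groups", [])
        event["privileged"] = "Privileged" in event["groups"]
        return event

    if __name__ == "__main__":
        raw = {"account": "dbadmin", "action": "CREATE USER", "db": "orders"}
        # Correlation rules can now key off group membership and privilege, not just an account string.
        print(enrich(raw))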

Because these inbound integrations tend to require custom connectors to get proper breadth and fidelity of data, it's a good idea to learn a bit about each vendor's partner program. Vendors use these programs to gain access to the engineering teams behind their data sources, but more importantly to devote the resources to developing rules, policies, and reports that take advantage of the additional data.

Outbound integration

  1. IT GRC: Given that SIEM/Log Management gathers information useful to substantiate security controls for compliance purposes, clearly it would be helpful to be able to send that information to a broader IT GRC (Governance, Risk, and Compliance) platform that is presumably managing the compliance process at a higher level. So integration with whatever IT GRC platform is in use within your organization (if any) is an important consideration when deciding to acquire SIEM/Log Management technology.
  2. Help Desk: The analysis performed within the SIEM/Log Management platform provides information about attacks in progress and usually requires some type of remediation action once an alert is validated. To streamline fixing these issues, it's useful to be able to submit trouble tickets directly into the organization's help desk system to close the loop. Some SIEM/Log Management platforms have a built-in trouble ticket system, but we've found that capability is infrequently used, since all companies large enough to utilize SIEM/LM also have some kind of external help desk system. Look for the ability to not only send alerts (with sufficient detail to allow the operations team to quickly fix the issue), but also to receive information back when a ticket is closed, and to automatically close the alert within the SIEM platform. A minimal sketch of this closed loop appears after this list.
  3. CMDB: Many enterprises have also embraced configuration management databases (CMDB) technology to track IT assets and ensure that configurations adhere to corporate policies. When trying to ensure changes are authorized, it’s helpful to be able to send indications of changes at the system and/or device level to the CMDB for confirmation.
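
Here is a minimal, hypothetical Python sketch of the closed-loop help desk integration described above: push a validated alert into a ticketing system with enough detail for operations, then close the corresponding SIEM alert when the ticket comes back closed. The https://helpdesk.example.com endpoint, its payload fields, and the close_alert callback are invented for illustration – a real deployment would use your help desk's actual API or the vendor's packaged connector.

    import requests   # assumes the help desk exposes a simple REST API

    HELPDESK_URL = "https://helpdesk.example.com/api/tickets"   # hypothetical endpoint

    def open_ticket(alert):
        """Create a trouble ticket with enough detail for the operations team to fix the issue."""
        payload = {
            "summary": "[SIEM] {0} on {1}".format(alert["rule"], alert["host"]),
            "details": alert["evidence"],
            "priority": alert["severity"],
            "external_id": alert["alert_id"],   # lets the close notification reference the SIEM alert
        }
        resp = requests.post(HELPDESK_URL, json=payload, timeout=10)
        resp.raise_for_status()
        return resp.json()["ticket_id"]

    def on_ticket_closed(notification, close_alert):
        """Handle a 'ticket closed' notification and auto-close the matching SIEM alert."""
        if notification.get("status") == "closed":
            close_alert(notification["external_id"],
                        resolution=notification.get("resolution", ""))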

Again, paying attention to each vendor's partner program and announced relationships can yield valuable information about the frequency of true enterprise deployment, as large customers demand their vendors work together – often forcing some kind of integration. It also pays to ask vendor references about their integration offerings – because issuing a press release does not mean the integration is functional, complete, or useful.

—Adrian Lane

IBM gets a BigFix for Tivoli Endpoint Management

By Mike Rothman

IBM continues to be aggressive with acquisitions, grabbing BigFix today for an undisclosed amount. Given BigFix’s aspirations (they were moving toward a public offering), I’m a bit surprised the economics weren’t disclosed, but it was likely a decent sized deal.

IBM and BigFix have a fairly long history of working together, and strategically this deal makes a lot of sense – especially given that IBM’s Tivoli systems management offerings weren’t very competitive on the endpoint. Once we got past the “Smarter Planet” branding hogwash on the analyst conference call, the leverage of IBM/BigFix became apparent. First, BigFix always positioned itself as a platform, driven by content and their Fixlets: applications that plug into the platform. You have to figure the IBM Global Services folks are drooling a bit to finally control an endpoint management integration platform – and the billable hours to build thousands more Fixlets.

BigFix as a stand-alone company wasn’t a long-term option. Small companies don’t get to play in the platform space, not over long periods of time anyway. But hats off to the BigFix folks – they focused on bringing specific use cases to market to show the power of their platform and knock down some big enterprise deployments. On the other hand, IBM is strictly a platform player, so the idea of Big Blue rolling out a comprehensive endpoint management offering is a no-brainer.

If anything, this solves a big operational problem for IBM, given their 500,000+ employees around the world (they plan to eat their own dog food with an enterprise-wide deployment) and millions of endpoints managed through their outsourcing business. From that perspective, this is very much like the HP/Opsware deal a few years ago. Yes, the deal gets justified by the big opportunity to sell the software, but the internal operational leverage of the technology is a big sweetener (and likely a deal size multiplier).

Additionally, IBM needed to make a move to bolster their security product capabilities, which are getting a bit long in the tooth. They've seen the former ISS erode to irrelevance; they moved the ISS products into the Tivoli group, but it's too little too late. With BigFix they get an opportunity to bring a far more strategic offering into the bag. Symantec has this capability through their Altiris acquisition, and EMC/RSA bought ConfigureSoft a while back to get better endpoint management. You have to wonder if McAfee was a player in this deal, because they've got a big hole in their offering around endpoint management.

Customer Impact

If you are a BigFix customer, you likely have mixed feelings. Now you get to deal with IBM, which can be a nightmare. And if you have a very heterogeneous environment, that support is at risk over time. Of course, both IBM and BigFix will maintain their commitment to supporting a heterogeneous world, but you figure IBM platforms will get priority for new features and Fixlets. That's the way of the world.

IBM outsourcing customers should be tickled. If you can get an endpoint change request through the gauntlet of change orders, contract (re)negotiations, and the other roadblocks IBM puts in your way, they’ll actually have a slick way to make the change. This also adds a number of other cool service offerings (energy management, endpoint remediation, asset management, etc.) that may actually add value to your services relationship. Imagine that.

Obviously you’ll see all the competition, both big (Symantec, RSA, HP) and little (LanDesk, Lumension, Shavlik) throw some FUD (fear, uncertainty, and doubt) balloons your way during the procurement process. Clearly there will be some impact to the product roadmap, and likely support, as the newly wealthy BigFixers move on and Tivoli starts putting their imprint on company operations. If anything, you should be able to use the FUD as more leverage to get additional pricing and T&C concessions when negotiating your purchase or renewal.


Like any other deal, most of the risk is in integration. Can IBM maintain the people and continue to drive the product to take advantage of the leverage they just paid for? I can say I was impressed with the three-phase integration plan IBM presented during the analyst call. The first phase is to get more exposure for BigFix within the customer base and good things should happen. After that, they integrate with the existing Tivoli stuff from a product and console standpoint.

Given the existing relationship, the integration issues are somewhat manageable. That doesn’t mean they don’t exist or that IBM won’t screw it up – just ask ISS about that. But given the work already done to drive integration (you’ve got to figure the deal has been in the works for a while) and the existing partnership, they have done what they can to contain the risk.

Bottom Line

The only outstanding question is how much of a premium IBM paid for BigFix. From almost every other standpoint the strategic rationale of this deal is strong, and even the issues are not that big. This likely means other big Security/IT companies (think McAfee, BMC, Oracle, etc.) need to grab some real estate in the endpoint management space. So not only is this a good day for the folks at BigFix, but Shavlik, Lumension, and LanDesk (once their emancipation from Emerson goes through) are well positioned to be next.

—Mike Rothman