Now that you’ve revisited your important use cases and derived a set of security monitoring requirements, it’s time to find the right fit among the dozens of alternatives. To wrap up this series we will walk you through a reasonably structured process to narrow down your short list, and then test the surviving products. Once you’ve chosen the technical winner, you need to make the business side of things work – and it turns out the technical winner is not always the solution you end up buying.

The first rule of buying anything is that you are in charge of the process. You’ll have vendors who will want you to use their process, their RFI/RFP language, their PoC Guide, and their contract language. All that is good and fine… if you want to buy their product. But more likely you want the best product to solve your problems, which means you need to be driving the process. Our procurement philosophy hinges on this.

What we have with security monitoring is a very crowded and noisy market. We have a set of incumbents from the SIEM space, and a set of new entrants wielding fancy math and analytics. Both groups have a set of base capabilities to address the key use cases: threat detection, forensics and response, and compliance automation. But differentiation occurs at the margins of these use cases, so that’s where you will be making your decision.

But no vendor is going to say, “We suck at X, but you should buy us because Y is what’s most important to you.” Even though they should. It’s up to you to figure out each vendor’s true strengths and weaknesses, and cross-reference them against your requirements. That’s why it’s critical to have a firm handle on your use cases and requirements before you start talking to vendors.

We divide vendor evaluation into two phases. First we will help you define a short list of potential replacements. Once you have the short list you will test one or two new platforms during a Proof of Concept (PoC) phase. It is time to do your homework. All of it. Even if you don’t feel like it.

The Short List

The goal at this point is to whittle the list down to 3-5 vendors who appear to meet your needs, based on the results of a market analysis. That usually includes sending out RFIs, talking to analysts (egads!), or using a reseller or managed service provider to assist. The next step is to get a better sense of those 3-5 companies and their products. Your main tool at this stage is the vendor briefing. The vendor brings in their sales folks and sales engineers (SEs) to tell you how their product is awesome and will solve every problem you have. And probably a bunch of problems you didn’t know you had too. But don’t sit through their standard pitch – you know what is important to you.

You need detailed answers to objectively evaluate any new platform. You don’t want a 30-slide PowerPoint walkthrough and generic demo. Make sure each challenger understands your expectations ahead of the meeting so they can bring the right folks. If they bring the wrong people cross them off. It’s as simple as that – it’s not like you have time to waste.

Based on the use cases you defined earlier in this process, have the vendor show you how their tool addresses each issue. This forces them to think about your problems rather than their scripted demo, and shows off capabilities which will be relevant to you. You don’t want to buy from the best presenter – identify the product that best meets your needs.

This type of meeting could be considered cruel and unusual punishment. But you need this level of detail before you commit to actually testing a product or service. Shame on you if you don’t ask every question to ensure you know everything you need. Don’t worry about making the SE uncomfortable – this is their job.

And don’t expect to get through a meeting like this in 30 minutes. You will likely need a half-day minimum to work through your key use cases. That’s why you will probably only bring 3-5 vendors in for these meetings. You will be spending days with each product during proof of concept, so try to disqualify products which won’t work before wasting even more effort on them. This initial meeting can be a painful investment of time – especially if you realize early that a vendor won’t make the cut – but it is worth doing anyway. You can thank us later.

The PoC

After you finish the ritual humiliation of every vendor sales team, and have figured out which products can meet your requirements, it’s time to get hands-on with the systems and run each through its paces for a couple days. The next step in the process, the Proof of Concept, is the most important – and vendors know that. This is where sales teams have a chance to win, so they tend to bring their best and brightest. They raise doubts about competitors and highlight their own successes. They have phone numbers for customer references handy. But for now forget all that. You are running this show, and the PoC needs to follow your script – not theirs.

Given the different approaches represented by SIEM and security analytics vendors, you are best served by testing at least one of each. As you will see from our recommended process, it is hard to find time to test more than a couple of products, but seeing how each type performs against your specific environment and adversaries will help you pick the platform that best meets your requirements.


Many security monitoring vendors have a standard testing process they run through, basically telling you what data to provide and what attacks to look for – sometimes even with their resources running their product. It’s like ordering off a prix fixe menu. You pick a few key use cases, and then the SE delivers what you ordered. If the vendor does it correctly it looks like a well-rehearsed ballet, where each participant precisely executes their assigned task. Everything quick and painless – just like security, right?

Wrong! Security is messy. Vendors design PoC processes to highlight their strengths and hide their weaknesses. We know this from first-hand experience – we have built them for vendors in past lives. We repeat this because it’s that important. You need to work through your situation, not their scenario.

Before you start the PoC, be clear about the evaluation criteria, based on your requirements and use cases from earlier in this process. Your criteria don’t need to be complicated. Your requirements should spell out the key capabilities you need, with a plan to further evaluate each challenger based on intangibles such as set-up/configuration, change management, customization, user experience/ease of use, etc.
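A simple weighted scorecard is one way to keep this evaluation honest and comparable across candidates. Here is a minimal sketch in Python – the criteria, weights, vendor names, and 1–5 scores below are all hypothetical placeholders, not part of any vendor’s process; substitute your own requirements and the scores your team records during each PoC.

```python
# Hypothetical weighted scorecard for comparing PoC candidates.
# All weights and scores are placeholders -- use your own requirements
# and the observations your team records during each PoC.

WEIGHTS = {
    "threat detection": 0.30,
    "forensics and response": 0.25,
    "compliance reporting": 0.20,
    "setup/configuration": 0.10,
    "ease of use": 0.15,
}

# 1-5 scores recorded by the evaluation team while memory is fresh.
scores = {
    "Vendor A": {"threat detection": 4, "forensics and response": 3,
                 "compliance reporting": 5, "setup/configuration": 2,
                 "ease of use": 4},
    "Vendor B": {"threat detection": 5, "forensics and response": 4,
                 "compliance reporting": 3, "setup/configuration": 4,
                 "ease of use": 3},
}

def weighted_score(vendor_scores):
    """Combine per-criterion scores into a single weighted total."""
    return sum(WEIGHTS[c] * s for c, s in vendor_scores.items())

# Rank candidates by weighted total, highest first.
ranked = sorted(scores, key=lambda v: weighted_score(scores[v]), reverse=True)
for vendor in ranked:
    print(f"{vendor}: {weighted_score(scores[vendor]):.2f}")
```

The point of the exercise is not the arithmetic – it is forcing the team to agree up front on what matters and how much, so the postmortem grades mean the same thing for every product.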

One more thing. We recommend investing in screen capture technology. It is hard to remember exactly what each tool did and how – especially after you have worked a few unfamiliar tools through exactly the same paces. So capture as much video as you can of the user experience – it will come in very handy as you approach your decision point.

Without further ado, let’s jump in.


One advantage of testing security management products is that you can actually monitor production systems without worrying about blowing them up, taking them down, or adversely impacting anything. So do that. Pull the data you need to execute on the use case. The point is to run things according to your needs, with your data, alerting on your policies. You will also want to configure a custom data source or two and integrate with your directory store to see how that works.

If compliance is your key requirement, use PCI as an example. Start pulling data from your protected network segment. Pump that data through the PCI reporting process. Is the data correct and useful to everybody interested? Are the reports comprehensive? Will you need to customize them for any reason? How easy is that? You need to answer these kinds of questions during the PoC.

Pay attention to visualization and user interface. Security systems are not only used by security professionals. A configurable UI makes it easier for a wider audience to contribute to and benefit from the tool. Configure some dashboards and see the results. Mess around with reports a bit. Tighten alert thresholds. Does the notification system work? Will alerts work in a timely fashion at enterprise volumes? Is the information in the dashboards and reports useful? These are all things to check as part of the test.

Run a Red Team

The next step is to see how the tool runs under fire. This is particularly critical for analytics-based solutions, whose claim to fame is that they can find unknown attacks. Well, then, run an unknown attack against yourself. Clearly attacking production systems would make you unpopular with ops folks, so set up a lab environment. Virtual environments are perfect for this – use the same base images for each vendor. The situation should be as realistic as possible.

Have attackers breach test systems with attack tools. Have your defenders try to figure out what is going on as it’s happening. Does the system alert as it should? Will you need to heavily customize rules? Can you identify the nature of attacks quickly? Does their super-duper forensic drill-down give you the view you need? The clock is ticking – how easy is it to use the system to search for clues?

Obviously this isn’t a real incident, but you need a feel for how the system performs in action. If an attacker is in your systems, will you find them? In time to stop or catch them? Once you know attackers are in, can you tell what they are doing? A red team exercise as part of the PoC will help determine that.

Knowing a tool will hold up in the heat of battle goes a long way toward giving security and operations teams confidence when they go live. Keep in mind that you cannot fully test scalability during a PoC, so focus on what you can fully test: the user experience. There is no better way to distill the effectiveness of a tool than to use it during an attack.

The Postmortem

At the end of the test, evaluate both successes and failures of the PoC in terms of your use cases and requirements. When you finish a red team exercise you should have a bunch of data which nicely illustrates what the attack team did – and perhaps what the defense team didn’t do as well as they could have. This is a learning experience for everyone, and real attack scenarios nicely illustrate the particular value of each platform.

Your team should grade each candidate while memory is fresh and perceptions are raw. After spending a week or two with another product they won’t remember what they liked and didn’t like about earlier ones – another reason screen grabs are handy.

Lather, Rinse, Repeat

You will probably test more than one product or service, so you get to do it all again – make sure to use the same scenarios for each. Consistency helps make the testing process fair and comparisons more meaningful.

Now you have all the information you need to make a decision, so it is time to figure out what to do and substantiate your choice for your internal sales process. The details of the PoC and the screen capture videos you collected for each competitor will come in handy here.


The end goal is a recommendation, so you need to document what you think and then present it to secure funding. You may not always be in the room when decisions are made, so your documentation must clearly articulate your reasons. We normally structure this artifact of the decision process as follows:

  • Requirements: Tell them what you need and who said you need it. Compliance and security requirements come from different groups, so make sure to reference the folks driving those use cases.
  • Coverage: What works and doesn’t with the desired solution within the context of your requirements, both now and as they evolve. Make sure it’s clear that your choice meets the requirements you just laid out.
  • Competition: Which other vendors did you disqualify and why? What did you learn during Proof of Concept? Are any of the competitors viable? What compromises would need to be made if another product was selected?
  • Cost Estimate: What would it cost to move to the new platform? How much is capital expense and what fraction is operational? What kind of investment in professional services would be required?
  • Migration Plan: What will the migration look like? How long will it take? Will migration disrupt any services? Will you be more exposed to attack, and if so for how long? You need all these answers before you pitch to the powers that be. Not a Gantt chart – that comes at the end – but enough to answer the tough questions.
  • Recommendation: Your entire document should be building to this point, where you put the best path down on paper. If it is a surprise to your audience, you did something wrong. This is about telling them what they already know, and making sure they have an opportunity to ask any remaining questions.
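The Cost Estimate section above boils down to simple arithmetic, but presenting it as an explicit capex/opex split heads off the most common finance questions. A minimal sketch, with entirely placeholder figures and a hypothetical assumption that professional services are treated as one-time capital expense:

```python
# Hypothetical three-year cost estimate for the recommendation document.
# Every figure below is a placeholder -- plug in the vendor's actual quote.

def three_year_tco(license_capex, annual_maintenance, annual_ops_staff,
                   professional_services):
    """Split total cost of ownership into capital and operational expense.

    Assumes (for illustration) that professional services are a one-time
    capital cost, while maintenance and operations staff recur annually.
    """
    capex = license_capex + professional_services
    opex = 3 * (annual_maintenance + annual_ops_staff)
    return {"capex": capex, "opex": opex, "total": capex + opex}

cost = three_year_tco(license_capex=250_000, annual_maintenance=50_000,
                      annual_ops_staff=120_000, professional_services=80_000)
print(cost)  # the breakdown the finance team will ask for
```

Run the same calculation for each finalist so the Competition section can state the cost of every compromise in the same terms.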

Now you have the thumbs-up from the internal team (we hope!). You need to negotiate with the vendor and get the deal done. We won’t get into the specifics of negotiating – you likely have people to do that – but understand that you can use time-honored tactics such as waiting until the end of the quarter, playing one vendor against another (if either could meet your requirements), and possibly asking for non-cash add-ons (such as professional services or product modules).

Once all this is done you may need to go back for final sign-off. This is when your process is most vulnerable. Negotiations around price and services can be challenging. But at least in negotiating with a vendor you know who the adversary is. Inevitably, internal resistance will appear (or reappear), and you may never see it coming. Especially if a losing vendor sells a lot of other stuff to your company and has friends in high places.

This entire process has prepared you to deal with these obstacles, so just work through the logic of your decision once more, making clear how your recommendation is best for the organization, and squash the resistance. Expect the losing vendor to go over your head – vendors don’t go quietly into the night once they lose a deal.


If you made it this far, take stock of the journey. At this stage in the buying process it feels like you have gone through a lot of work, but haven’t even started the real work yet. On real security stuff, anyway. Let’s step back a moment to focus on what’s important: getting stuff done as simply and easily as possible. The migration process is not easy, because you need to maintain service levels without exposing your organization to additional risk. This may involve supporting two systems for a short while, or using two systems in a hybrid architecture – perhaps indefinitely.

Either way, when you put your head on the block to choose a new platform, the migration needs to go smoothly. There is no such thing as a ‘flash’ cutover. We recommend you start deploying the new monitoring platform long before you get rid of the old. At best you will deprecate portions of the older system after newer replacement capabilities are online, but you will likely want the older system as a fallback until all new functions have been vetted and tuned. We learned the importance of this staging process the hard way. Ignore it at your peril – security monitoring supports several key business functions, so if things go haywire, it’s a bad day.

Fast forward a few months and the challenging process of actually buying will be in the rear-view, and you will be getting to work, actually using your new tool. But don’t get too comfortable. Just as we saw a new group of security analytics players challenge incumbent SIEM vendors, inevitably there will be something else newer and shinier in the market.

In 2-3 years, after that new-car smell wears off the platform you just bought, you’ll be questioning whether today’s choice remains correct. You may be building another Team of Rivals, which will eventually give way to the next strategic platform. Seems silly, right? But there is no use resisting – it’s all part of the game.