Security Management 2.0: Vendor Evaluation—Culling the Short List
So far we have discussed how security management platforms have evolved, how your requirements have changed since you first deployed the platform, and how you need to evaluate your current platform (Part 1, Part 2) in light of both. Now it's time to get into the meat of the decision process by defining your selection criteria for your Security Management 2.0 platform. Much of defining your evaluation criteria is wading objectively through vendor hyperbole. As technology markets mature (and SIEM is pretty mature), the capabilities of the offerings tend to converge. The messaging is very similar, and it's increasingly hard to differentiate one platform from another. Given your unhappiness with your current platform (or you wouldn't be reading this, right?), it's important to distill what each platform does and doesn't do, as early in the process as you can.

We will look at the vendor evaluation process in two phases. In this post we'll help you define a short list of potential replacements. Maybe you use a formal RFP/RFI to cull the 25 companies in the space down to 3-5, maybe you don't. You'll see soon enough why you can't run 10 vendors through even the first stage of this process. At the conclusion of the short list exercise, you'll need to test one or two new platforms in a Proof of Concept (PoC), which we'll detail in the next post. We don't recommend you skip directly to the test, by the way. Each platform has strengths and weaknesses, and just because a vendor happens to land in the right portion of a magical chart doesn't mean it's the right choice for you. Do your homework. All of it. Even if you don't feel like it.

Defining the Short List

A few aspects of the selection criteria should be evaluated with a broader group of challengers – think 3-5 at this point. You need to prioritize each of these areas based on your requirements. That's why you spent so much time earlier defining and gaining consensus on what's important for replacing your platform.

Your main tool at this stage of the process is what we kindly call the dog and pony show. That's when the vendor brings in their sales folks and sales engineers (SEs) to tell you how their product is awesome and will solve every problem you have. Of course, what they won't be ready for (unless they read this post as well) is the 'intensity' of your KGB-style interrogation techniques. Basically, you know what's important to you, and you need confidence that any vendor passing through this gauntlet (and moving on to the PoC) will be able to meet your requirements.

Let's talk a bit about tactics for getting the answers you need, based on the areas where your existing product is lacking (from the platform evaluation). You need detailed answers during these meetings. This meeting is not a 30-slide PowerPoint deck and a generic demo. Make sure each challenger understands those expectations ahead of the meeting, so they have the right folks in the room. If they bring the wrong people, cross them off the short list. It's as simple as that – it's not like you have a lot of time to waste, right?

Security: We recommend you put together a scenario as a case study for each challenger. You want to understand how they'd detect an attack based on the information sources they gather, and how they configure their rule sets and alerts. Make it detailed, but not totally ridiculous. So basically, dumb down your existing environment a bit and run them through an attack scenario you've seen recently. This will be a good exercise for seeing how the data they collect is used to address a major security management use case: detecting an emerging attack quickly. Have the SE walk you through setting up or customizing a rule. Use your own scenario to reduce the likelihood of the SE having a pre-built rule. You want to really understand how the rules work, because you will spend a lot of time configuring your rules.
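To make that rule discussion concrete, here is roughly the kind of correlation logic you are asking the SE to build in front of you – repeated failed logins from one source followed by a success. To be clear, this is only an illustrative sketch in generic Python, not any vendor's rule language; the event schema, field names, and thresholds are all hypothetical.

```python
from collections import defaultdict, deque
from datetime import datetime, timedelta

# Hypothetical normalized event schema: every SIEM uses its own;
# here each event is a dict with timestamp, source_ip, user, outcome.
FAILURE_THRESHOLD = 5              # failed logins before we care
WINDOW = timedelta(minutes=10)     # correlation window

def brute_force_then_success(events):
    """Yield an alert when a source IP racks up repeated failed logins
    and then logs in successfully within the correlation window."""
    failures = defaultdict(deque)  # source_ip -> timestamps of recent failures
    for ev in sorted(events, key=lambda e: e["timestamp"]):
        ts, src, outcome = ev["timestamp"], ev["source_ip"], ev["outcome"]
        # Drop failures that have aged out of the window
        while failures[src] and ts - failures[src][0] > WINDOW:
            failures[src].popleft()
        if outcome == "failure":
            failures[src].append(ts)
        elif outcome == "success" and len(failures[src]) >= FAILURE_THRESHOLD:
            yield {
                "rule": "brute-force-then-success",
                "source_ip": src,
                "user": ev["user"],
                "failed_attempts": len(failures[src]),
                "time": ts,
            }
            failures[src].clear()

if __name__ == "__main__":
    # Fabricated sample data: six failures, then a success from the same source
    base = datetime(2011, 9, 1, 12, 0, 0)
    sample = [
        {"timestamp": base + timedelta(minutes=i), "source_ip": "10.0.0.99",
         "user": "admin", "outcome": "failure"} for i in range(6)
    ] + [{"timestamp": base + timedelta(minutes=7), "source_ip": "10.0.0.99",
          "user": "admin", "outcome": "success"}]
    for alert in brute_force_then_success(sample):
        print(alert)
```

The point of having the SE build something like this live is to see how much of that logic is expressible directly in their rule language, and how much quietly disappears into professional services.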
Compliance: Next, you need to understand what level of automation exists for compliance purposes. Ask the SE to show you the process of preparing for an audit. And no, showing you a list of 2,000 reports, most called PCI X.X, is not sufficient. Ask them to produce samples of a handful of critical reports you rely upon, to see how closely they hit the mark – you can see the difference between reports developed by an engineer and those created by an auditor. You need to understand where the data is coming from, and hopefully they will have a demo data set to show you a populated report. The last thing you want is to discover, two days before an audit, that their reports don't pull from the right data sources.

Integration: In this part of the discussion, delve into how the product integrates with your existing IT stack. How does the platform pull data from your identity management system? Your CMDB? What about data collection? Are the connectors pre-built and maintained by the vendor? What about custom connectors? Is there an SDK available, or does it require a bunch of professional services?

Forensics: Vendors throw around the term root cause analysis frequently, while rarely substantiating how their tool is used to work through an incident. Have the SE literally walk you through an investigation based on their sample data set. Yes, you'll test this yourself later, but get a feel for what tools are built in and how they are used by an SE who should really know the system.

Scalability: If your biggest issue is a requirement for more power, then you'll want to know (at a very granular level) how each challenger solves the problem. Dive into their data model and their deployment architectures, and have them tell stories about their biggest implementations. If scalability is a