Securosis

Research

Security Management 2.5: Replacing Your SIEM Yet? [New Paper]

Security Information and Event Management (SIEM) systems create a lot of controversy among security folks – they are a pain, but the technology is instrumental for security, compliance, and operations management. The problem is – given the rapid evolution of SIEM/Log Management over the past 4-5 years – that product obsolescence is a genuine issue. The problems caused by products that have failed to keep pace with technical evolution and customer requirements cannot be trivialized. This pain becomes more acute when a SIEM fails to collect the essential information during an incident – and even worse when it completely fails to detect a threat. Customers spend significant resources (both time and money) on care and feeding of their SIEM. If they don’t feel the value is commensurate with their investment they will move on – searching for better, easier, and faster products. It is only realistic for these customers to start questioning whether their incumbent offerings make sense moving forward.

We are happy to announce the launch of our latest research paper: Security Management 2.5. We discuss changing customer demands, and how vendors are altering their platforms to address them. We then provide a detailed process to help determine whether you need to swap providers, and if so how. We would like to thank IBM and McAfee for licensing this research. Support from the community enables us to bring you our Totally Transparent Research free of charge, so we are happy IBM and McAfee chose to license this report. You can get the full paper: Security Management 2.5: Replacing Your SIEM Yet?


Friday Summary: February 14, 2014

Bacon as a yardstick: This year will see the 6th annual Securosis Disaster Recovery Breakfast, and I am measuring attendance in required bacon reserves. Jillian’s at the Metreon has been a more than gracious host each year for the event. But when we order food we (now) do it in increments of 50 people. At the moment we are ordering bacon for 250, and we might need to bump that up! We have come a long way since 2009, when we had about 35 close friends show up, but we are overjoyed that so many friends and associates will turn out. Regardless, we expect a quiet, low-key affair. It has always been our favorite event of the week because of that. Bring your tired, your hungry, your hungover, or just plain conference-weary self over and say ‘Howdy’. There will be bacon, good company, and various OTC pharmaceuticals to cure what ills you.

Note from Rich: Actually we had a solid 100 or so that first year. I know – I had to pay the bill solo.

Big Spin: More and more firms are spinning their visions of big data, which in turn makes most IT folks’ heads spin. These visions look fine within a constrained field of view, but the problem is what is left unsaid: essentially the technologies and services you will need but which are not offered – and which vendors aren’t talking about. Worse, you have to filter through non-standard terminology deployed to support vendor spin – so it’s extremely difficult to compare apples against apples. You cannot take vendor big data solutions at face value – at this early stage you need to dig in a bit. But to ask the right questions, you need to know what you probably don’t yet understand. So the vendor product demystification process begins with translating their materials out of vendor-speak. Then you can determine whether what they offer does what you need, and finally – and most importantly – identify the areas they are not discussing, so you can discover their deficiencies. Is this a pain in the ass? You betcha!
It’s tough for us – and we do this all day, for a living. So if you are just learning about big data, I urge you to look at the essential characteristics defined in the introduction to our Securing Big Data Clusters paper – it is a handy tool to differentiate big data from big iron, or just big BS.

Lying in wait: I have stated before that we will soon stop calling it “big data”, and instead just call these platforms “modular databases”. Most new application development projects do not start with a relational repository – instead people now use some form of NoSQL. Which should be very troubling to any company that derives a large portion of its revenue from database sales. Is it odd that none of the big three database vendors has developed a big data platform (a real one – not a make-believe version)? Not at all. Why jump in this early, when developers are still trying to decide whether Couch or Riak or Hadoop or Cassandra or something else entirely is best for their projects? So do the big three database vendors endorse big data? Absolutely. To varying degrees they encourage customer adoption, with tools to support integration with big data – usually Hadoop. It is only smart to play it slow, lying in wait like a giant python, and later swallow the providers that win out in the big data space. Until then you will see integration and management tools, but very tepid development of NoSQL platforms from the big relational players. Yes, I expect hate mail on this from vendors, so feel free to chime in.

Hunter or hunted? On the Securosis internal chat board we were talking about open security job positions around the industry. Some are very high-profile meat grinders that we wouldn’t touch with asbestos gloves and a 20’ pole. Some we recommend to friends with substantial warnings about mental health and marital status. Others not at all.
Invariably our discussion turned to the best job you never took: jobs that sounded great until you got there – firms often do a great job of hiding dirty laundry until after you come on board. Certain positions provide a learning curve for a company: whoever takes the job, no matter how good, fails miserably. Only after the post-mortem can the company figure out what it needs and how to structure the role so it can work. Our advice: be careful and do your homework. Security roles are much more difficult than, say, programmer or generic IT staffer. Consult your network of friends, seek out former employees, and look at the firm’s overall financial health for some obvious indicators. Who held the job before you, and what happened? And if you get a chance to see Mike Rothman present “A day in the life of a CISO”, check it out – he captures the common pitfalls in a way that will make you laugh – or cry, depending on where you work.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Rich quoted in “Building the security bridge to the Millennials”.

Favorite Securosis Posts

Dave Lewis: After-School Special: It’s Time We Talked – about Big Data Security.
David Mortman: RSA Conference Guide 2014 Watch List: DevOps.
Adrian Lane: RSA Conference Guide 2014 Watch List: DevOps. Just a great post.
Mike Rothman: RSA Conference Guide 2014 Watch List: DevOps. Sometimes it’s helpful to look into the crystal ball and think a bit about what’s coming. You won’t see that much at the RSAC, but we can at least give you some food for thought.
Rich: Bit9 Bets on (Carbon) Black. Mike always does the best deal posts in the industry.

Other Securosis Posts

Security Management 2.5: Replacing Your SIEM Yet? [New Paper].
Advanced Endpoint and Server Protection: Prevention.
Incite 2/12/2014: Kindling.
Firestarter: Mass Media Abuse.
RSA Conference Guide 2014 Deep Dive: Network Security.
RSA Conference Guide 2014 Key Theme: Cloud Everything.
RSA Conference Guide 2014 Key Theme: Crypto and Data Protection.


RSA Conference Guide 2014 Key Theme: Crypto and Data Protection

You didn’t think you would need to wait long for a Snowden reference, did you? Well, you know we Securosis guys like to keep you in suspense. But without further ado, it’s time. Snowden time!

CryptoZoology

The biggest noisemaker at RSA this year – besides Rothman – will be everyone talking about the NSA revelations. Everyone with a bully pulpit (which is basically everyone) will be yelling about how the NSA is all up in our stuff. Self-aggrandizing security pundits will be preaching about how RSA took a bribe, celebrating their disgust by speaking in the hallways and at opportunistic splinter conferences, instead of at the RSA podia. DLP, eDiscovery, and masking vendors will be touting their solutions to the “insider threat” with Snowden impersonators (as discussed in APT0). Old-school security people will be mumbling quietly in the corners of the Tonga Room, clutching drinks with umbrellas in them, saying “I told you so!”

One group who will be very, very quiet during the show: encryption vendors. They will not be talking about this! Why? Because they really can’t prove their stuff is not compromised, and in the absence of proof, they have already been convicted in the security star chamber. Neither Bruce Schneier nor Ron Rivest will be pulling proofs of non-tampering out of magic math hats. And even if they could, the security industry machine isn’t interested. There is too much FUD to throw. What’s worse is that encryption vendors almost universally look to NIST to validate the efficacy of their solutions – now that NIST is widely regarded as a pawn of the NSA, who can provide assurance? I feel sorry for the encryption guys – it will be a witch hunt!

The real takeaway here is that IT is – for the first time – questioning the foundational technologies data security has been built upon. And it has been a long time coming!
Once we get past the Snowden and NSA hype, the industry won’t throw the baby out with the bathwater, but will continue to use encryption – now with contingency plans, just in case. Smart vendors should be telling customers how to adjust or swap algorithms if and when parts of the crypto ecosystem become suspect. These organizations should also be applying disaster recovery techniques to encryption solutions, just in case.
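One way to build that kind of agility in is to make every protected record self-describing, so an algorithm can be revoked and swapped without changing storage formats. Here is a minimal sketch using HMAC integrity tags – the scheme, names, and algorithm lists are ours for illustration, not from any vendor:

```python
import hashlib
import hmac
import os

# Registry of approved algorithms. If one becomes suspect, move it to
# DEPRECATED -- stored records keep working, but verification forces
# re-tagging under a trusted algorithm.
ALGORITHMS = {"sha256": hashlib.sha256, "sha512": hashlib.sha512}
DEPRECATED = {"md5", "sha1"}

def tag(key: bytes, message: bytes, alg: str = "sha256") -> str:
    """Return a self-describing MAC: 'alg$hexdigest'."""
    if alg in DEPRECATED or alg not in ALGORITHMS:
        raise ValueError(f"algorithm not approved: {alg}")
    digest = hmac.new(key, message, ALGORITHMS[alg]).hexdigest()
    return f"{alg}${digest}"

def verify(key: bytes, message: bytes, stored: str) -> bool:
    """Verify a tag; reject anything minted with a deprecated algorithm."""
    alg, _, digest = stored.partition("$")
    if alg in DEPRECATED or alg not in ALGORITHMS:
        return False  # caller should re-tag under a trusted algorithm
    expected = hmac.new(key, message, ALGORITHMS[alg]).hexdigest()
    return hmac.compare_digest(expected, digest)

key = os.urandom(32)
t = tag(key, b"audit record")
assert t.startswith("sha256$")
assert verify(key, b"audit record", t)
```

The same labeling trick applies to encrypted payloads: record which algorithm produced each ciphertext, and re-encrypt on read once the label lands on the deprecated list.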


RSA Conference Guide 2014 Key Theme: Big Data Security

As we continue posting our key themes for the 2014 RSA Conference, let’s dig a bit into big data, because you won’t be hearing anything about it at the show…

After-School Special: It’s Time We Talked – about Big Data Security

The RSA Conference floor will be packed full of vendors talking about the need to secure big data clusters, and how the vast stores of sensitive information in these databases are at risk. The only thing that can challenge “data velocity” into a Hadoop cluster is the velocity at which FUD comes out the mouth of a sales droid. Sure, potential customers will listen intently to this hot new trend because it’s shiny and totally new. But they won’t actually be doing anything about it. To recycle an overused analogy, big data security is a little like teen sex: lots of people are talking about it, but not that many are actually doing it.

Don’t get us wrong – companies really are using big data for all sorts of really cool use cases, including analyzing supply chains, looking for signs of life in space, fraud analytics, monitoring global oil production facilities, and even monitoring the metadata of the entire US population. Big data works! And it provides advanced analysis capabilities at incredibly low cost. But rather than wait for your IT department to navigate their compliance mandates and budgetary approval cycles, your business users slipped out the back door because they have a hot date with big data in the cloud. Regardless of whether those users understand the risks, they are pressing forward. This is where your internal compliance teams start to sound like your parents telling you to be careful and not to go out without your raincoat on. What users hear is that the audit/compliance teams don’t want them to have any fun because it’s dangerous.
The security industry is no better, and the big data security FUD is sure to come across like those grainy old public service films you were forced to watch in high school about something-something-danger-something… and that’s when you fell asleep. We are still very early in our romance with big data, and your customers (yes, those pesky business users) don’t want to hear about breaches or discuss information governance as they explore this new area of information management.


Friday Summary: January 31, 2014

During my total and complete laptop fail for this week’s Firestarter, I was trying to make the point that large software projects have a considerably higher probability of failure. It is no surprise that many government IT projects are ‘failures’ – they are normally managed as ginormous projects with many competing requirements. It worked for the Apollo missions, so governments doggedly cling to that validated model. But in the commercial environment Agile is having a huge and positive impact on software development. Coincidentally, this week Jim Bird discussed the findings of the 2013 Chaos Report. In a nutshell the topline was “More projects are succeeding (39% in 2012, up from 29% in 2004), mostly because projects are getting smaller”. But Jim points out that you cannot conjure up an Agile development program like the Wonder Twins activate their superhero powers – Agile development processes are one aspect, but program management across multiple Agile efforts is another thing entirely. A lot of thought and work has gone into this over the last few years, and things like the Scaled Agile Framework can help. Still, most government projects I have seen employ no Agile techniques. There is a huge body of knowledge out there on how to get these things done, and industry leads the public sector by a wide margin.

I used to get a lot of spam with hot stock tips. I was assured a penny stock was about to shoot through the roof because a patent was approved, and got plenty of dire warnings about pharmaceutical firms failing clinical trials. Of course the info was bogus, but Mr. Market, the psycho that he is, actually reacted. Anonymous bloggers could manipulate the market simply by leaving comments on blogs and message boards, offering no evidence but generating huge reactions. If you are a day trader this can pretty much ensure you will make money.
This whole RSA deal, where they allegedly took $10M from the NSA to compromise security products, has the same feel – it sounds believable, but we are seeing a huge backlash without any sort of evidence. It feels like market manipulation. Could RSA have been bribed? Absolutely. Would the NSA conduct this business without leaving a paper trail? Probably. But would I buy or sell stocks based on spam, anonymous blog posts, or my barber’s recommendation? No. That is not an appropriate response. Nor will I grandstand in the media or start a new security conference, trying to hurt RSA, because of what their software division did or did not do years ago. That would also be inappropriate. Pulling the ECC routines in question? Providing a competing solution? Providing my firm some “disaster recovery” options in case of compromised crypto/PRNG routines? Those are all more appropriate responses.

For those of you who asked about my upcoming research calendar, I am excited about some projects that will commence in a couple weeks and complete in Q2. First up will be an update to the Big Data Security paper from mid-2012. SOOOO much has happened in the last 6-9 months that a lot of it is obsolete, so I will be updating it. Gunnar and I are working on a project we call “Rebel Federation”, which is how we describe the assembly of an identity management solution from best-of-breed components, rather than a single suite / single vendor stack. We will go through motivations, how to assemble it, and how to mitigate some of the risks. And given the burst of tokenization inquiries over the past 60 days, I will be writing about that as well. If you have questions, please keep them coming – I have not yet decided on an outline. And finally, before RSA, I promise to launch the Security Analytics with Big Data paper.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Adrian quoted on Database Denial of Service.
David Mortman and Adrian Lane will be presenting at Secure360.
Mike and JJ podcast about the Neuro-Hacking talk at RSA.

Favorite Securosis Posts

Mike Rothman: The Future of Information Security. Rich is our big thinker (when he gets enough sleep, at least) and I am fired up to read this series about how we need to start thinking about information security moving forward. The technology foundation under us is changing dramatically, and that won’t leave much of current security standing in the end. Either get ahead of it now, or clean up the rubble of your security program.
Adrian Lane: Southern Snowpocalypse. It snowed here in Phoenix last year, but nothing like it did in ATL yesterday. It does not matter where snow hits – if it is at the wrong time and a city is unprepared, it’s crippling.

Other Securosis Posts

Firestarter: Government Influence.
Leveraging Threat Intelligence in Security Monitoring: Benefiting from the Misfortune of Others.
Summary: Mmm. Beer.

Favorite Outside Posts

Jamie Arlen: James at ShmooCon 2014. Totally self-serving, I know, but awesome none the less.
Gunnar: NFC and BLE are friends.
Adrian Lane: Pharmaceutical IT chief melds five cloud security companies to bolt down resource access. This is my first NetworkWorld fave – usually I ridicule their stuff – but this is a good description of a trend we have been seeing as well. And you need some guts to walk this path.
Mike Rothman: Volunteer at HacKid! If you’re on the west coast and have kids, you should be at HacKid, April 19-20 in San Jose. Plenty of opportunities to volunteer. I’ll be there (with my 10-year-old twins), and I think Rich is planning to attend as well. See you there!

Research Reports and Presentations

Eliminate Surprises with Security Assurance and Testing.
What CISOs Need to Know about Cloud Computing.
Defending Against Application Denial of Service Attacks.
Executive Guide to Pragmatic Network Security Management.
Security Awareness Training Evolution.
Firewall Management Essentials.
A Practical Example of Software Defined Security.
Continuous Security Monitoring.
API Gateways: Where Security Enables Innovation.
Identity and Access Management for Cloud Services.

Top News and Posts

Software [in]security and scaling automated code review.
Just Let Me Fling Birds at Pigs Already!


Friday Summary: January 17, 2014

Today I am going to write about tokenization. Four separate people have sent me questions about tokenization in the last week. As a security paranoiac I figured there was some kind of conspiracy or social engineering going on – this whole NSA/Snowden/RSA thingy has me spooked. But after I calmed down and realized that these are ‘random’ events, I recognized that the questions are good and relevant to a wider audience, so I will answer a couple of them here on the blog. In no particular order:

“What is throttling tokenization?” and “How common is the ‘PCI tokenization throttle function’ in tokenization products and services?”

I first heard about “throttling tokenization systems” and “rate limiting functions” from the card brands as a secondary security service. As I understand the intention, it is to provide a failsafe – in case a payment gateway is compromised or an attacker gains access to a token service – so someone couldn’t siphon off the entire token database. My assumption was that this rate monitor/throttle would only be provided on de-tokenization requests or vault inquiries that return cardholder information. Maybe that’s smart, because you’d have a built-in failsafe to limit information leakage. Part of me thinks this is more misguided guidance, as the rate limiting feature does not appear to be in response to any reasonable threat model – de-tokenization requests should be locked down and not available through general APIs in the first place! Perhaps I am not clever enough to come up with a compromise that would warrant such a response, but everything I can think of would (should) be handled in a different manner. But still, as I understand from conversations with people who are building tokenization platforms, the throttling functions are a) DDoS protection and b) a defense against someone who figures out how to request all tokens in a database. And is it common?
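Mechanically, such a throttle is straightforward – for example, a token-bucket guard in front of the de-tokenization call, so a compromised client cannot bulk-drain the vault faster than the refill rate. A sketch; the class and parameters are ours, not from any product:

```python
import time

class DetokenizeThrottle:
    """Token-bucket limiter for de-tokenization requests: permits short
    bursts, but clamps sustained request volume to the refill rate."""

    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec          # sustained requests/second
        self.burst = burst                # maximum burst size
        self.allowance = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Return True if this de-tokenization request may proceed."""
        now = time.monotonic()
        # refill the bucket proportionally to elapsed time, capped at burst
        self.allowance = min(self.burst,
                             self.allowance + (now - self.last) * self.rate)
        self.last = now
        if self.allowance < 1.0:
            return False                  # over budget: deny (and alert)
        self.allowance -= 1.0
        return True

throttle = DetokenizeThrottle(rate_per_sec=5, burst=10)
results = [throttle.allow() for _ in range(50)]
assert results[:10] == [True] * 10        # initial burst allowed
assert results.count(True) <= 11          # then clamped to the refill rate
```

In practice a guard like this would be keyed per client credential, with denials feeding the same fraud and threat analytics that watch the rest of the payment path.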
Not so far as I know – I don’t know of any token service or product that builds this in; instead the function is provided by other fraud and threat analytics at the network and application layers. Honestly, I don’t have inside information on this topic, and one of the people who asked this question should have had better information than I do.

Do you still write about tokenization? Yes.

Are you aware of any guidance on the use of vault-less solutions? Are there any proof points or third-party validations of their security?

For the audience: vault-less tokenization solutions do not store a database of generated tokens – they use a mathematical formula to generate them, so there is no need to store what can be easily derived. And to answer the question: no, I am not aware of any. That does not mean no third-party validation exists, but I don’t follow these sorts of proofs closely. What’s more, because the basic design of these solutions closely resembles a one-time pad or similar construction, conceptually they are very secure. The proof is always in the implementation, so if you need this type of validation, have your vendor provide a third-party assessment by people qualified for that type of analysis.

Why is “token distinguishability” discussed as a best practice? What is it, and which vendors provide it?

Because PCI auditors need a way to determine whether a database is full of real credit cards or just tokens. This is a hard problem – tokens can and should be very close to the real thing. The goal for tokens is to make them as real as possible so you can use them in payment systems, but they will not be accepted as actual payment instruments. All the vendors potentially do this. I am unaware of any vendor offering a tool to differentiate real vs. tokenized values, but hope some vendors will step forward to help out.

Have you seen a copy of the tokenization framework Visa/Mastercard/etc. announced a few months back? No.
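As an aside on distinguishability: one hypothetical approach is to mint card-shaped tokens that deliberately fail the Luhn check every real PAN passes, so an auditor can mechanically tell a column of tokens from live card numbers. A sketch, not any vendor's method:

```python
import random

def luhn_valid(number: str) -> bool:
    """Standard Luhn check, which real card numbers pass."""
    total = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:        # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def make_token(rng: random.Random) -> str:
    """16-digit, card-shaped token that deliberately fails Luhn."""
    while True:
        candidate = "".join(str(rng.randrange(10)) for _ in range(16))
        if not luhn_valid(candidate):
            return candidate

rng = random.Random(42)
token = make_token(rng)
assert len(token) == 16 and not luhn_valid(token)
assert luhn_valid("4111111111111111")   # a well-known test PAN passes Luhn
```

The trade-off is that some payment systems validate Luhn on the way in, so schemes that must pass through such checks preserve Luhn instead and rely on other markers.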
As far as I know that framework was never published, and my requests for copies were met with complete and total silence. I did get snippets of information from half a dozen different people in product management or development roles – off the record – at Visa and Mastercard. It appears their intention was to define a tokenization platform that could be used across all merchants, acquirers, issuers, and related third parties – a platform offered by the brands to make tokenization an industry standard. On a side note, I really did think, from the way the PR announcement was phrased, that the card brands were shooting for a cloud identity platform to issue transaction tokens after a user self-identified to the brands. It looked like they wanted a one-to-one relationship with the buyer, to disintermediate merchants out of the payment card relationship. That could be a very slick cloud services play, but apparently I was on drugs – according to my contacts there is no such effort.

And don’t forget to RSVP for the 6th annual (really, the 6th? How time flies…) Securosis Disaster Recovery Breakfast during the RSA Conference.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Adrian quoted in DBaaS article.
Rich talks with Dennis Fisher about the Target breach (podcast).

Favorite Securosis Posts

Rich: Security Management 2.5: Negotiation. I hate negotiating. Some people live for it, but I can’t be bothered. On that note, I need to go buy a car.
David Mortman: Firestarter: Crisis Communications.
Mike Rothman: Security Management 2.5: Negotiation. This is a great post. A solid plan for buying any big-ticket item.
Adrian Lane: Apple’s Very Different BYOD Philosophy. Nobody else has covered this to my knowledge, but Rich describes it very clearly. Enterprise-owned devices are simpler, but iOS almost seamlessly handles the division between enterprise and user domains on BYOD gear. Not very dramatic, but simple and effective.

Other Securosis Posts

A Very Telling Antivirus Metric.
Reducing Attack Surface with Application Control: Use Cases and Selection Criteria.
Incite 1/15/2014: Declutter.
Advanced Endpoint and Server Protection:


Security Management 2.5: Migration

If you made it this far, we know your old platform is akin to an old junker automobile: every day you drive to work in a noisy, uncomfortable, costly vehicle that may or may not get you where you need to be, and every time you turn around you’re spending more money to fix something. With cars, figuring out what you want, shopping, getting financing, and then dealing with car salespeople is no picnic either, but in the end you do it to make your life a bit easier and yourself more comfortable. It is important to remember this because, at this stage of SIEM replacement, it feels like we have gone through a lot of work just so we can do more work to roll out the new platform. Let’s step back for a moment and focus on what’s important: getting stuff done as simply and easily as possible.

Now that you are moving to something else, how do you get there? The migration process is not easy, and it takes effort to move from the incumbent to the new platform. We have outlined a disciplined and objective process to determine whether it is worth moving to a new security management platform. Now we will outline a process for implementing the new platform and transitioning from the incumbent to the new SIEM. You need to implement, and migrate your existing environment to the new platform, while maintaining service levels, and without exposing your organization to additional risk. This may involve supporting two systems for a short while – or, in a hybrid architecture, using two systems indefinitely. Either way, when a customer puts his or her head on the block to select a new platform, the migration needs to go smoothly. There is no such thing as a ‘flash’ cutover. We recommend you start deploying the new SIEM long before you get rid of the old one. At best, you will deprecate portions of the older system after newer replacement capabilities are online, but you will likely want the older system as a fallback until all new functions have been vetted and tuned.
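"Vetted" can be made concrete with a parallel-run check: feed both platforms during the overlap period, then compare per-source event counts over the same window before deprecating anything on the old system. A sketch – the function name, data, and threshold are ours, purely illustrative:

```python
def diverging_sources(old_counts: dict, new_counts: dict,
                      tolerance: float = 0.05) -> list:
    """Return event sources whose volume on the new platform differs
    from the old platform's by more than `tolerance` (fractional).
    Anything flagged here needs collector/parser work before cutover."""
    flagged = []
    for source in sorted(set(old_counts) | set(new_counts)):
        old = old_counts.get(source, 0)
        new = new_counts.get(source, 0)
        baseline = max(old, 1)            # avoid division by zero
        if abs(new - old) / baseline > tolerance:
            flagged.append(source)
    return flagged

# Per-source event counts for the same window, one dict per platform.
old = {"fw-dmz": 10_000, "ad-dc1": 5_200, "web-01": 900}
new = {"fw-dmz": 10_050, "ad-dc1": 3_100, "web-01": 910}
assert diverging_sources(old, new) == ["ad-dc1"]   # missing ~40% of events
```

A daily report like this during the parallel run gives you an objective "done" signal per node, rather than a gut feel that the new platform is keeping up.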
We have learned the importance of this staging process the hard way. Ignore it at your own peril, keeping in mind that your security management platform supports several key business functions.

Plan

We offer a migration plan for moving to the new security management platform. It covers data collection as well as migrating/reviewing policies, reports, and deployment architectures. We break the migration process into two phases: planning and implementation. Your plan needs to be very clear and specific about when things get installed, how data gets migrated, when you cut over from old systems to new, and who performs the work. The Planning step leverages much of the work done up to this point in evaluating replacement options – you just need to adapt it for migration.

Review: Go back through the documents you created earlier. First consider your platform evaluation documents, which will help you understand what the current system provides and key deficiencies to address. These documents become the priority list for the migration effort, the basis for your migration task list. Next leverage what you learned during the PoC. To evaluate your new security management platform provider you conducted a mini deployment. Use what you learned from that exercise – particularly what worked and didn’t – as input for subsequent planning, and address the issues you identified.

Focus on incremental success: What do you install first? Do you work top down or bottom up? Will you keep both systems operational throughout the entire migration, or shut down portions of the old as each node migrates? We recommend using your deployment model as a guide. You can learn more about these models by checking out Understanding and Selecting a SIEM. When using a mesh deployment model, it is often easiest to make sure a single node/location is fully functional before moving on to the next.
With ring architectures it is generally best to get the central SIEM platform operational, and then gradually add nodes around it until you reach the scalability limit of the central node. Hierarchical models are best deployed top-down: the central server first, followed by regional aggregation nodes in order of criticality, down to the collector level. Break the project up to establish incremental successes and avoid dead ends.

Allocate resources: Who does the work? When will they do it? How long will it take to deploy the platform, data collectors, and/or log management support system(s)? This is also the time to engage professional services and enlist the new vendor’s assistance. The vendor presumably does these implementations all day long, so they should be expert at estimating these timelines. You may also want to engage them to perform some (or all) of the work in tandem with your staff, at least for the first few locations until you get the process down.

Define the timeline: Estimate the time it will take to deploy the servers, install the collectors, and implement your policies. Include time for testing and verification. There is likely to be some ‘guesstimation’, but you have some reasonable metrics to plan from, from the PoC and prior experience with SIEM. You did document the PoC, right? Plan the project commencement date and publish it to the team. Solicit feedback and adjust before commencing, because you need shared accountability with the operations team(s) to make sure everyone has a vested interest in success.

Preparation: We recommend you do as much work as possible before you begin migration, including construction of the rules and policies you will rely on to generate alerts and reports. Specify in advance any policies, reports, user accounts, data filters, backup schedules, data encryption, and related services you can. You already have a rule base, so leverage it to get going.
Of course you’ll tune things as you go, but why reinvent the wheel or rush unnecessarily? Keep in mind that you will always find something you failed to


Security Management 2.5: Negotiation

You made your decision and kicked it up the food chain – now the fun begins. Well, fun for some people, anyway. For the first half of this discussion we will assume you have decided to move to a new platform, and offer tactics for negotiating the replacement. But some people decide not to move, using the possible switch as negotiating leverage. There is nothing wrong with staying with your existing platform, so long as you have done the work to know it can meet your requirements. We are writing this paper for the people who keep telling us about their unhappiness, and how their evolving requirements have not been met. So after asking all the right questions, if the best answer is to stay put, that is the less disruptive path anyway.

Replacement tactics

For now, though, let’s assume your current platform won’t get you there. Now your job is to get the best price for the new offering. Here are a few tips for getting the best deal:

Time the buy: Yes, this is Negotiation 101. Wait until the end of the quarter and squeeze your sales rep for the best deal to get the PO in by the last day of the month. Sometimes it works, sometimes it doesn’t. But it’s worth trying. The rep may ask for your commitment that the deal will, in fact, get done that quarter. Make sure you can get it done if you pull this card.

Tell the incumbent they lost the deal: Next get the incumbent involved. Once you put in a call letting them know you are going in a different direction, they will usually respond. Not always, but generally the incumbent will try to save the deal. And then you can go back to the challenger and tell them they need to do a bit better, because you got this great offer from their entrenched competition. Just like when buying a car, to use this tactic you must be willing to walk away from the challenger and stay with the incumbent.

Look at non-cash add-ons: Sometimes the challenger can’t discount any more.
But you can ask for additional professional services, modules, boxes, licenses, whatever. With new data analytics, maybe your team lacks some in-house skills for a successful transition – the vendor can help. Remember, the incremental cost of software is zero, zilch, nada – so vendors can often bundle in a little more to get the deal when pushed to the wall. Revisit service levels: Another non-cash sweetener could be an enhanced level of service. Maybe it’s a dedicated project manager to get your migration done. Maybe it’s the Platinum level of support, even if you pay for Bronze. Given the amount of care and feeding required to keep any security management platform tuned and optimized, a deeper service relationship could come in handy. Dealing with your boss’s boss: One last thing: be prepared for your recommendation to be challenged, especially if the incumbent sells a lot of other gear to your company. This entire process has prepared you for that call, so just work through the logic of your decision once more, making clear that your recommendation is best for the organization. But expect the incumbent to go over your head – especially if they sell a lot of storage or servers to your company. Negotiating with the incumbent Customers also need to consider that maybe staying is the best option for their organization, so knowing how to leverage both sides helps you make a better deal. Dealing with an incumbent who doesn’t want to lose business adds a layer of complexity to the decision, so customers need to be prepared for incumbent vendors trying to save the business; fortunately there are ways to leverage that behavior as the decision process comes to a conclusion. It would be naive to not prepare in case the decision goes the other way – due to pricing, politics, or any other reason beyond your control. 
So if you have to make the status quo work and keep the incumbent, here are some ideas for making lemonade from the proverbial lemon:

  • Tell the incumbent they are losing the deal: We know it is not totally above-board – but all's fair in love, war, and sales. If the incumbent didn't already know they were at risk, it can't hurt to tell them. Some vendors (especially the big ones) don't care, which is probably one reason you were looking at new stuff anyway. Others will get the wake-up call and try to make you happy. That's the time to revisit your platform evaluation and figure out what needs fixing.
  • Get services: If you have to make do with what you have, at least force the vendor's hand to make your systems work better. Asking a vendor for feature enhancement commitments will only add to your disappointment, but there are many options at your disposal. If your issue is not getting proper value from the system, push the incumbent to provide professional services to improve the implementation. Maybe send your folks to training. Have their team set up a new set of rules and do knowledge transfer. We have seen organizations literally start over, which may make sense if your initial implementation is sufficiently screwed up.
  • Scale up (at lower prices): If scalability is the issue, confront it directly with the incumbent and request additional hardware and/or licenses to address it. This may not be enough, but every little bit helps – and if moving to a new platform isn't an option, at least you can ease the problem a bit. Especially when the incumbent knows you were looking at new gear because of a scaling problem.
  • Add use cases: Another way to get additional value is to request additional modules thrown into a renewal or expansion deal. Maybe add the identity module or look at configuration auditing. Or work with the team to add database

Share:
Read Post

Security Management 2.5: Selection Process

With vendor evaluations in hand, you are ready to make your decision, right? The answer is both yes and no. We know the importance of this decision – you are here because your first attempt at this project wasn't as successful as it needed to be. After the vendor evaluation process you are in a position to distinguish innovative technologies from pigs with fresh lipstick. But now you need to determine which vendor is actually the best fit for you!

Successful decision-making on SIEM replacement goes beyond vendor evaluation – it entails evaluating yourself too. It is important to differentiate between the two because you cannot make a decision without taking a long, hard look at yourself, your team, and your company. This is an area where many projects fail, so let's break the decision down to ensure you can make a good recommendation and feel comfortable with it – from both internal and external perspectives.

But remember that selecting the 'right' vendor may come down to more than matching needs against capabilities. The output of our Security Management 2.5 process is not really a decision – it's more of a recommendation. The final decision will likely be made in the executive suite. That's why we focused so much on gathering data (quantitative where possible) – you will need to defend your recommendation until the purchase order is signed. And probably afterwards.

Defensible Position

We won't mince words. This decision generally isn't about objective or technical facts – especially since most of you reading this have an incumbent in play, typically part of a big company with important relationships with heavies inside your shop. This could get political, or the decision might be entirely financial, so you need your ducks in a row and a compelling argument for any change. And even then you might not be able to push through a full replacement. In that case the answer might be to supplement.
In this scenario you still aggregate information with the existing platform, but then feed it to the new platform for analysis, reporting, forensics, etc. across the enterprise. Given the cost of running both, this is unacceptable for some organizations, but if your hands are tied on replacement, this kind of creative approach is worth considering.

But that is still only the external part of the decision process. In many cases the (perceived) failure of your existing SIEM may be self-inflicted, so you also need to evaluate and explain the causes of the failure, with assurance that you can avoid those issues this time. If not, your successor will be in the same boat in another 2-3 years. So before you put your neck on the chopping block and advocate for change (if that is what you decide), do some deep internal analysis as well.

Looking in the mirror

First, make sure you really re-examined the existing platform in terms of the original goals. Did your original goals adequately map to your needs at the time, or was there stuff you did not anticipate? How have your goals changed over time? Be honest! Do not let ego get in the way of doing what's right, and take a hard, fresh look at the decision to ensure you don't repeat previous mistakes.

Did you kick off this process because you were pissed at the original vendor? Or because they got bought and seemed to forget about the platform? Do you know what it will take to get the incumbent where it needs to be – and whether that is even possible? Is it about throwing professional services at the issues? Is there a fundamental technology problem? Did you assess the issues critically the first time around?

If it was a skills issue, have you successfully addressed it? Can your folks build and maintain the platform moving forward? Are you looking at a managed service to take that concern off the table? If it was a resource problem, do you now have enough staff for proper care and feeding?
Yes, the new generation of platforms requires less expertise to keep operational, but don't be naive – no matter what any sales rep says, you cannot simply set and forget them. Whatever you pick will require expertise to deploy, manage, tune, and analyze reports. These platforms are not self-aware – not by a long shot.

Remember, there are no right or wrong answers here, but the truth (and your commitment) will become clear when you need to sell something to management. Some of you may worry that management will see the need for replacement as "your fault" for choosing the incumbent, so make sure you have answers to these questions and that you aren't falling into a self-delusional trap. You need your story straight and your motivations clear. Have a straightforward, honest assessment of what is going right and wrong, so you are not caught off guard when asked to justify changes and new expenses.

Setting Expectations

Revisiting requirements provides insight into what you need the security management platform to do. Remember, not everything is Priority #1, so pick your top three must-have items and prioritize the rest of the requirements. You can prioritize specific use cases (compliance, security, forensics, operations), and get a pretty good feeling for whether the new platform or the incumbent will meet your expectations.

If you love some new features of the challenger(s), will your organization actually leverage them? Firing off alerts faster won't help if your team takes a week to investigate each issue, or cannot keep up with the increased demand. The new platform's ability to look at application and database traffic doesn't matter if your developers won't help you understand normal behavior to build the rule set. Fancy network flow analysis can be a productivity sink if your DNS and directory infrastructure is a mess and you can't reliably map an IP address to a user ID.

Does your existing product have too many features? Yes, some organizations simply cannot take advantage of (or

Share:
Read Post

Security Management 2.5: The Decision Process

By this point you appreciate the large gap between what you need and what you have, so it's time to dip your toes in the water and see what other platform vendors offer. But how? You need to figure out which vendors are worth investigating for their advantages, despite any disadvantages. Much of defining evaluation criteria and potential candidates involves wading objectively through vendor hyperbole, to separate what each offering actually does from the drug-induced optimism of the vendor's marketing department.

As technology markets mature (and SIEM is pretty mature), the base capabilities of the platforms converge, making them all look alike. Complicating the issue, vendors adopt similar messaging regardless of actual features, making it increasingly difficult to differentiate between the platforms. But you still need to do it. Given your unhappiness with your current platform (you wouldn't be reading this otherwise, right?), you need to distill what each platform does and doesn't do, as early in the process as you can. And make no mistake – there are significant differences!

We divide the vendor evaluation process into two phases. First we will help you define a short list of potential replacements. Maybe you use a formal Request for Proposals or Information (RFP/RFI) to cull the 15 companies left in the space down to 3-5, or perhaps you don't. You will see soon enough why you can't run 10 vendors through even the first stage of the evaluation, so you need a way to narrow the field before you get started. At the conclusion of the short list exercise you will test one or two new platforms in a proof of concept (PoC), which we will detail. We don't recommend skipping directly to the PoC, by the way. Each platform has strengths and weaknesses, and just landing in the upper-right quadrant of a magical chart doesn't make a vendor the right choice for you.
And the RFP process usually unearths items you had not considered, so it is valuable for its own sake. It is time to do your homework. All of it. Even if you don't feel like it.

The Short List

The goal at this point is to whittle the list down to 3-5 vendors who appear to meet your needs, based upon their responses to the RFIs or RFPs you sent. Their answers should quickly disqualify a few who lack critical capabilities. The next step, for the remaining vendors, is to get a better sense of their products and firms. Your main tool at this stage is what we call the dog and pony show. The vendor brings in their sales folks and sales engineers (SEs) to tell you how their product is awesome and will solve every problem you have. Of course they won't be ready (unless they read this paper as well) for the intensity of your KGB-style interrogation techniques. You know what is important to you, and you need confidence that any vendor passing through this gauntlet to the PoC can meet your requirements.

Let's talk a bit about tactics for getting the answers you need, based on the deficiencies in your existing product (from your platform evaluation). You need detailed answers at these meetings to objectively evaluate any new platform. You don't want a 30-slide PowerPoint walkthrough and a generic demo. Make sure the challenger understands those expectations ahead of the meeting, so they have the right folks in the room. If they bring the wrong people, cross them off. It's as simple as that – it's not like you have a lot of time to waste, right?

We recommend defining a set of use cases/scenarios for the vendor to walk you through. Then their skilled folks with expertise using the tool can show you how they would solve the problem you mapped out. This forces them to think about your problems rather than their scripted demo, and shows off the tool's relevant capabilities instead of the smoothness of the folks staging the demo.
You don’t want to buy from the best presenter – you want to identify the product that best meets your needs, and that means making the vendor do what you need, not what shows off their product best. Here are a few scenarios to help guide you in setting up these meetings. Prioritize this list based on your own needs, but it should get you 90% of the way through narrowing down the list.

  • Security: The first scenario should focus on security. That’s what this ultimately boils down to, right? You want to understand how they would detect an attack based on their information sources, as well as how they configure rule sets and alerts. Make it detailed but not ridiculous. Basically, simplify your existing environment a bit and run them through an attack scenario you saw recently. This is a good exercise for seeing how the data they collect addresses a major use case: detecting an emerging attack quickly. Have the SE walk you through setting up and customizing a rule, because you will often need to do both. Use your own scenario to reduce the likelihood of the SE having a pre-built rule. You want to really understand how the rules work, because you will spend a lot of time configuring them, so it’s useful to see how easy it is for an expert to create new policies.
  • Compliance: Next you need to understand how much automation is available for compliance. Ask the SE to show you the process of preparing for an audit. And no, showing you a list of 2,000 reports, most called “PCI X.X”, is not sufficient. Nor is a drop-down list of 200 checkboxes with obscure names going to help you. Ask them to produce samples of a handful of critical reports you rely upon, to see how closely they hit the mark – you can see the difference between reports developed by an engineer and those created by an auditor. You need to

Share:
Read Post

Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments – just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.