Securosis

Research

Recently on the Heavy Feed

Since we post most of the content for our blog series on the Heavy Feed (get it via the web or RSS), every so often we like to post links to our latest missives on the main feed. Within the next 10 days we’ll be wrapping up both our Fact-based Network Security and Security Management 2.0 series. As always, we love feedback, discussion, dissension, and the occasional troll in the comments, so fire away. We look forward to your participation.

Fact-based Network Security:
- Metrics and the Pursuit of Prioritization
- Defining ‘Risk’
- Outcomes and Operational Data
- Operationalizing the Facts
- Compliance Benefits

Security Management 2.0: Is it time to replace your SIEM?
- Time to Replace Your SIEM? (new series)
- Platform Evolution
- Revisiting Requirements
- Platform Evaluation, Part 1
- Platform Evaluation, Part 2
- Vendor Evaluation – Culling the Short List
- Vendor Evaluation – Driving the PoC


Security Management 2.0: Making the Decision

It’s time – you are ready. You have done the work: revisiting your requirements, evaluating your current platform against your current and emerging requirements, assessing new vendors/platforms to develop a short list, and running a comprehensive Proof of Concept. Now it’s time to make the call. We know this is an important decision – we are here because your first attempt at this project wasn’t as successful as it needed to be. So let’s break down the decision to ensure you can make a good recommendation and feel comfortable with it.

That’s actually a good point to discuss. The output of our Security Management 2.0 process is not really a decision – it’s more of a recommendation. That’s the reality – the final decision will likely be made in the executive suite. That’s why we have focused so much on gathering data (quantitative where possible) – you will need to defend your recommendation until the purchase order is signed, and probably afterwards as well.

We won’t mince words. This decision generally isn’t about the facts – especially with an incumbent in play, likely part of a big company that may have important relationships with heavies in your shop. So you need your ducks in a row and a compelling argument for any change. But that’s still only part of the decision process. In many cases, the (perceived) failure of your existing SIEM is self-inflicted. So you also need to evaluate and explain the causes of the failed project, with assurance that they will be addressed and avoided this time. If not, your successor will be in the same boat in another 2-3 years. So before you put your neck on the chopping block and advocate for a change (if that’s what you decide), do some deep internal analysis as well.

Introspection

The first thing is to make sure you have really re-examined the existing platform in terms of the original goals. Did your original goals adequately map to your needs at the time, or was there stuff you did not expect?
How have your goals changed over time? Be honest! This is not the time to let your ego get in the way of doing what’s right, and you need a hard and fresh look at the decision to ensure you don’t repeat previous mistakes. Did you kick off this process because you were pissed at the original vendor? Or because they got bought and seemed to forget about the platform? Do you know what it will take to get the incumbent to where it needs to be – or whether that is even possible? Is it about throwing professional services at the issues? Is there a fundamental technology problem?

Remember, there are no right or wrong answers here, but the truth will become clear when you need to sell this to management. Some of you may be worried that management will look at the need for replacement as ‘your fault’ for choosing the incumbent, so make sure you have answers to these questions and that you aren’t falling into a self-delusion trap. You need your story straight and your motivations clear.

Did you assess the issues critically the first time around? If it was a skills issue, have you addressed it? Can your folks build and maintain the platform moving forward? Or are you looking at a managed service to take that concern off the table? If it was a resource problem, do you now have enough staff for proper care and feeding? Yes, the new generation of platforms requires less expertise to keep operational, but don’t be naive – no matter what any sales rep says, you cannot simply set and forget them. Whatever you pick will require expertise to deploy, manage, tune, and analyze reports. These platforms are not self-aware – not by a long shot. The last thing you want to do is set yourself up for failure, so make sure you ask the right questions ahead of time and be honest about the answers.

Expectations

The next main aspect of the decision is reconciling your expectations with reality. Revisiting requirements provides information on what you need the security management platform to do.
You should be able to prioritize the specific use cases (compliance, security, forensics, operations), and have a pretty good feeling about whether the new platform or incumbent will be able to meet your expectations. Remember, not everything is Priority #1, so pick your top three must-have items, and prioritize the requirements.

If you are enamored with some new features of the challenger(s), will your organization be able to leverage them? Firing off alerts faster may not be helpful if your team takes a week to investigate any issues, or cannot keep up with the increased demand. The new platform’s ability to look at application and database traffic doesn’t matter if the developers won’t help you understand normal behavior to build the rule set. Fancy network flow analysis can be a productivity sink if your DNS and directory infrastructure is a mess and you can’t reliably map IP to user ID.

Or does your existing product have too many features? Yes, it does happen that some organizations simply cannot take advantage of (or even handle) complex multi-variate correlation across the enterprise. Do you need to aggregate logs because organizational politics, or your team’s resources or skill set, prevent you from getting the job done? This might be a good reason to outsource or use a managed service. There isn’t a right or a wrong answer here, only the answer. And not being honest about that answer will land you in the hotseat again.

If you kickstarted this effort because the existing product missed something and it resulted in a breach, can you honestly say the new thing would (not ‘might’) detect that attack? We have certainly seen high profile breaches result in tossing the old and bringing in the new (someone has to pay, after all), but make sure you


Security Management 2.0: Vendor Evaluation – Driving the PoC

As we discussed in the last post, when considering new security management platforms, it’s critical to cull your short list based on your requirements, and then move into the next step of the evaluation process – the Proof of Concept (PoC).

Our PoC process is somewhat controversial – mostly because vendors hate it. Why? Because it’s about you and your needs, not them and their product. But you are the buyer, right? Always remember that. Most SIEM vendors want to push you through a 3-5 day eval of their technology on their terms, with their guy driving. You already have a product in place so you know the drill. You defined a few use cases important to you, and then the vendor (and their SE) stood the product up and ran through those use cases. They brought in a defined set of activities for each day, and you ended the test with a good idea of how their technology works, right? Actually, wrong. The vendor PoC process is built to highlight their product’s strengths and hide its weaknesses. We know this from first-hand experience – we have built them for vendors in our past roles.

Your objective must be to put each product through your paces, not theirs. To find the warts now – not when you are responding to an incident. It’s wacky that some vendors get scared by a more open PoC process, but their goal is to win the deal, and they put a lot of sweat into scripting their process so it goes smoothly for everyone involved. We hate to say it, but smooth sailing is not the point! The vendor will always say “We can do that!” – it’s your job to find out how well – or how awkwardly.

So set up evaluation criteria based on your requirements and use cases. Your criteria don’t need to be complicated. Your requirements should spell out the key capabilities you need, and then plan to further evaluate each challenger based on intangibles such as set-up/configuration, change management, customization, user experience/ease of use, etc.
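One way to keep those criteria honest is a simple weighted scorecard. The criteria, weights, and 1-5 ratings below are made-up illustrations (not Securosis guidance), but the mechanics keep the intangibles from being decided by gut feel alone:

```python
# Hypothetical weighted scorecard for comparing challengers (and the
# incumbent) after the PoC. Criteria and weights are illustrative only;
# yours should come straight from your requirements exercise.
WEIGHTS = {
    "security": 0.30,      # detection/correlation use cases
    "compliance": 0.25,    # report accuracy and coverage
    "integration": 0.20,   # connectors, directory, CMDB
    "ease_of_use": 0.15,   # setup, tuning, day-to-day UX
    "scalability": 0.10,   # data volumes, deployment architecture
}

def weighted_score(ratings):
    """ratings: criterion -> 1-5 rating from your eval team."""
    assert set(ratings) == set(WEIGHTS), "rate every criterion"
    return round(sum(WEIGHTS[c] * r for c, r in ratings.items()), 2)
```

A challenger that rates 4s on security and compliance but 2s everywhere else scores 3.1 out of 5, which makes the tradeoff explicit when you brief management on your recommendation.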
Before you start, have your team assess your current platform as a basis for comparison. As you start the PoC, we recommend you invest in screen capture technology. It’s hard to remember what these tools did and how they did it later – especially after you’ve seen a few of them work through the same procedures. So capture as much video as you can of the user experience – it will come in very handy when you need to make a decision. We’ll discuss that in the next post. Without further ado, let’s jump into the PoC.

Stand it up, for reals

One of the advantages of testing security management products is that you can actually monitor production systems without worrying about blowing them up, taking them down, or adversely impacting anything. So we recommend you do just that. Plan to pull data from your firewalls, your IDS/IPS systems, and your key servers. Not all devices, of course, but enough to get a feel for how you need to set up the collectors. You will also want to configure a custom data source or two and integrate with your directory store to see how that works. Actually do a configuration and bootstrap the system in your environment.

Keep in mind that the PoC is a great time to get some professional services help – gratis. This is part of the sales process for the vendors, so if you want to model out a targeted attack and then enumerate the rules in the system, have the SE teach you how to do it yourself. Then model out another attack and build the rules yourself, without help. The key is to learn how to run the system and to get comfortable – if you do switch you will be living with your choice for a long time.

Focus on visualization, your view into the system. Configure some dashboards and see the results. Mess around with the reports a bit. Tighten the thresholds of the alerts. Does the notification system work? Will the alerts be survivable at production levels for years? Is the information useful?
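To make the custom data source exercise above concrete, here is a minimal sketch of the kind of plumbing you will be wiring up during the PoC. This is not any vendor’s collector API – just the standard RFC 3164 priority arithmetic applied to a raw syslog line:

```python
import re

# Sketch of parsing a raw syslog line from a custom data source.
# The PRI field encodes facility and severity: PRI = facility * 8 + severity.
SYSLOG_RE = re.compile(r"^<(?P<pri>\d{1,3})>(?P<msg>.*)$")

def parse_syslog(line):
    """Split a raw syslog line into facility, severity, and message."""
    m = SYSLOG_RE.match(line)
    if not m:
        return None  # not syslog-framed; route to a catch-all parser
    pri = int(m.group("pri"))
    return {
        "facility": pri // 8,
        "severity": pri % 8,
        "message": m.group("msg").strip(),
    }
```

A line like `<34>Oct 11 22:14:15 fw1 sshd[2001]: Failed password for root` parses to facility 4, severity 2 – the sort of normalization every collector does before correlation can happen.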
These are all things you need to do as part of kicking each challenger’s tires. If compliance is your key requirement, use PCI as an example. Start pulling data from your protected network segment. Pump that data through the PCI reporting process. Is the data correct and useful for everybody with an interest? Are the reports comprehensive? Will you need to customize the report for any reason? You need to answer these kinds of questions during the PoC.

Run a Red Team

Run a simulated attack against yourself. We know actually attacking production systems would make you very unpopular with the ops folks, so set up a lab environment. But otherwise, you want this situation to be as realistic as possible. Have attackers breach test systems with attack tools. Have your defenders try to figure out what is going on, as it’s happening. Does the system alert as it should? Will you need to heavily customize the rule set? Can you identify the nature of the attack quickly? Does their super-duper forensic drill-down give you the view you need? The clock is ticking, so how easy is it to use the system to search for clues? Obviously this isn’t a real incident situation, so you’ll take some editorial liberties, and that’s fine. You want a feel for how the system performs in near-real-time. If an attacker is in your systems, will you find them? In time to stop or catch them? Once you know they are there, can you tell what they are doing? A Red Team PoC will help you determine that.

Do a Post-Mortem

Once you are done with the Red Team exercise, you should have a bunch of data that will make for a nice forensic investigation of what the attack team did, and perhaps what the defense team


Incite 9/7/2011: Decisions, Decisions

Making decisions is very hard for most people. Not for me. The Boss and I constantly discuss a single issue over and over again as she debates all aspects of a big decision. I try to be patient, but patience is, uh, not my forte. I know it’s her process and to rush that usually lands me a spot in the doghouse, but it’s still hard to understand.

Decisions are easy for me. I do the work, look at the upside and downside, and make the call. Next. I don’t look back either. When I make a decision, I’m pretty confident it’s the right thing to do at that point in time. That’s the key. Any decision any of us make at any time is presumably the best decision right then. 10 minutes or 10 years from now things will have changed. Things always change. The question is how much.

Sometimes you’ll find your decisions are wrong. Actually, often your decisions are wrong. Yeah, it’s that human thing. I’ve been known to weigh intuition higher than data in some decisions. Especially relative to my career choices. If it felt right, whatever that means, I would go for it. And I’ve been wrong in those choices, a lot. But I guess I come from the school that says it’s better to do stuff and screw up, than to not do anything – stuck in a cycle of analysis paralysis. I’m sure I’ll have regrets at some point, but it won’t be because I couldn’t make a decision.

It’s worth mentioning that I’m not opposed to revisiting a decision, but only if something has changed that affects my underlying assumptions. Lots of folks stew over a decision, poring over the same data over and over again, in an endless cycle of angst and second guessing. If the data doesn’t change, neither should the decision. But these folks figure that if they question themselves constantly for long enough, the decision will become easy. But often, they never achieve peace of mind. Gosh, that has to be hard.

I pay a lot more attention to the downside of any decision.
In most cases, the worst case scenario is you upset someone or waste time and/or money. Obviously I want to avoid those outcomes where possible, but those are manageable downsides for me. So I don’t obsess over decisions. I make the decision and I move on. Second guessing isn’t productive. Part of life is taking risks and adapting as needed. And cleaning up the inevitable mess when you are wrong. I’m okay with that.

-Mike

Photo credit: “Lose your sleep before your decision, not after it” originally uploaded by Scott McLeod

Incite 4 U

Liar, liar, pants on fire: Any time I catch my kids telling me less than the truth, I break into the “Liar, liar” refrain over and over again. Yes, I look stupid, but they hate it even more, so it’s worth doing. One of the (former) Anonymous folks pretty much pinpoints the fundamental skill set of social engineering – lying. Okay, there is grey around lies, but ultimately that’s what it is. Does that make the ability to defend against lies any less important? Of course not. Nor am I judging folks who practice social engineering daily and professionally. But if it walks and quacks like a duck, you might as well call it a duck. – MR

Misplaced confidence: There will be a lot written over the next weeks and months about the hack of the Certificate Authority DigiNotar, including a post I’m working on. But if you want to quickly learn a key lesson, check out these highlights from the investigation report – thanks to Ira Victor and the SANS forensics blog. No logging. Flat network. Unpatched Internet-facing systems. Total security fundamentals FAIL. Even better, they kept the breach hidden for a month, and it probably happened many months earlier than their claimed date. Keep in mind this was a security infrastructure company. You know, the folks who are supposed to be providing a measure of trust on the Internet, and helping others secure themselves. Talk about making most of the mistakes in the book! And BTW – as I’ve said before, I know for a fact other security companies have been breached in recent years and failed to disclose. How’s that for boosting consumer confidence? – RM

They stole what?: When it comes to breach notification laws, California has been at the forefront for more than a decade. Now California has updated its breach disclosure laws to require disclosure of additional incident data. Most firms adhering to breach notification laws include so little information that the recipients of a breach notification have no clue what it means to them, nor what steps they need to take to protect themselves. Credit monitoring services are more of a red herring – and occasionally a devious revenue opportunity for breached companies to offset notification costs. So California Senate Bill 24 (SB-24) requires companies to include additional information on what happened, and to explicitly state what type of data was leaked. Will it help? As usual, it depends on what the company decides to put in the letter, but I don’t have high hopes. Will security vendors be pitching monitoring software to aid companies in identifying what was stolen? Absolutely, but many firms’ legal teams will not be eager to have that data hanging around because it’s often a smoking gun, and they will choose ignorance over security to reduce liability. As they always do. – AL

Ethics, hypocrisy, and certifications: You have to hand it to Jericho, one of the drivers of attrition.org. He puts the time in to build somewhat airtight cases, usually turning folks’ words against them in interesting ways. I wouldn’t want to take him on in a debate, that’s for sure. His recent post at Infosec Island, clearly pointing out the hypocrisy of the CISSP folks, is a hoot. As usual, you can find all


Security Management 2.0: Vendor Evaluation—Culling the Short List

So far we have discussed a bit of how security management platforms have evolved, how your requirements have changed since you first deployed the platform, and how you need to evaluate your current platform (Part 1, Part 2) in light of both. Now it’s time to get into the meat of the decision process by defining your selection criteria for your Security Management 2.0 platform.

Much of defining your evaluation criteria is wading objectively through vendor hyperbole. As technology markets mature (and SIEM is pretty mature), the capabilities of each offering tend to get pretty close. The messaging is very similar and it’s increasingly hard to differentiate one platform from another. Given your unhappiness with your current platform (or you wouldn’t be reading this, right?), it’s important to distill down what a platform does and doesn’t do, as early in the process as you can.

We will look at the vendor evaluation process in two phases. In this post, we’ll help you define a short list of potential replacements. Maybe you use a formal RFP/RFI to cull the 25 companies in the space down to 3-5, maybe you don’t. You’ll see soon enough why you can’t run 10 vendors through even the first stage of this process. At the conclusion of the short list exercise, you’ll need to test one or two new platforms during a Proof of Concept, which we’ll detail in the next post. We don’t recommend you skip directly to the test, by the way. Each platform has strengths and weaknesses, and just because a vendor happens to be in the right portion of a magical chart doesn’t mean it’s the right choice for you. Do your homework. All of it. Even if you don’t feel like it.

Defining the Short List

A few aspects of the selection criteria should be evaluated with a broader group of challengers. Think 3-5 at this point. You need to prioritize each of these areas based on your requirements.
That’s why you spent so much time earlier defining and gaining consensus on what’s important for replacing your platform. Your main tool in this stage of the process is what we kindly call the dog and pony show. That’s when the vendor brings in their sales folks and sales engineers (SEs) to tell you how their product is awesome and will solve every problem you have. Of course, what they won’t be ready for (unless they read this post as well) is the ‘intensity’ of your KGB-style interrogation techniques. Basically, you know what’s important to you and you need confidence that any vendor passing through this gauntlet (and moving on to the PoC) will be able to meet your requirements.

Let’s talk a bit about tactics to get the answers you need, based on the areas where your existing product is lacking (from the platform evaluation). You need detailed answers during these meetings. This meeting is not a 30-slide PowerPoint and a generic demo. Make sure the challenger understands those expectations ahead of the meeting, so they have the right folks in the room. If they bring the wrong people, cross them off the short list. It’s as simple as that – it’s not like you have a lot of time to waste, right?

Security: We recommend you put together a scenario as a case study for each challenger. You want to understand how they’d detect an attack based on the information sources they gather and how they configure their rule sets and alerts. Make it detailed, but not totally ridiculous. So basically, dumb down your existing environment a bit and run them through an attack scenario you’ve seen recently. This will be a good exercise for seeing how the data they collect is used to solve a major security management use case: detecting an emerging attack quickly. Have the SE walk you through setting up or customizing a rule. Use your own scenario to reduce the likelihood of the SE having a pre-built rule.
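For a sense of what “walk me through building a rule” should surface, here is a hypothetical sliding-window correlation rule: alert on a burst of failed logins from one source within a time window. No SIEM exposes exactly this API – this is illustrative logic only – but the SE should be able to express something equivalent live, in front of you:

```python
from collections import defaultdict, deque

# Hypothetical sketch of a correlation rule you might ask an SE to build:
# alert when one source IP generates `threshold` failed logins within
# `window` seconds. Illustrative only, not any vendor's rule language.
class FailedLoginRule:
    def __init__(self, threshold=5, window=60):
        self.threshold = threshold
        self.window = window
        self.events = defaultdict(deque)  # source IP -> event timestamps

    def feed(self, src_ip, timestamp):
        """Record one failed login; return True when the rule should alert."""
        q = self.events[src_ip]
        q.append(timestamp)
        # Age out events that fell outside the sliding window.
        while q and timestamp - q[0] > self.window:
            q.popleft()
        return len(q) >= self.threshold
```

Watching how the SE maps this kind of threshold, window, and grouping logic onto their product tells you how much rule configuration will actually cost you later.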
You want to really understand how the rules work, because you will spend a lot of time configuring your rules.

Compliance: Next, you need to understand what level of automation exists for compliance purposes. Ask the SE to show you the process of preparing for an audit. And no, showing you a list of 2,000 reports, most called PCI X.X, is not sufficient. Ask them to produce samples for a handful of critical reports you rely upon to see how closely they hit the mark – you can see the difference between reports developed by an engineer and those created by an auditor. You need to understand where the data is coming from, and hopefully they will have a demo data set to show you a populated report. The last thing you want is to learn that their reports don’t pull from the right data sources two days before an audit.

Integration: In this part of the discussion, delve into how the product integrates with your existing IT stack. How does the platform pull data from your identity management system? CMDB? What about data collection? Are the connectors pre-built and maintained by the vendor? What about custom connectors? Is there an SDK available, or does it require a bunch of professional services?

Forensics: Vendors throw around the term root cause analysis frequently, while rarely substantiating how their tool is used to work through an incident. Have the SE literally walk you through an investigation based on their sample data set. Yes, you’ll test this yourself later, but get a feel for what tools they have built in and how they can be used by the SE, who should really know how to use the system.

Scalability: If your biggest issue is a requirement for more power, then you’ll want to know (at a very granular level) how each challenger solves the problem. Dive into their data model and their deployment architectures, and have them tell stories about their biggest implementations. If scalability is a


The New Path of Least Resistance

It’s hard to believe it has been 10 years since the 9/11 terrorist attacks on the US. I remember that day like it was yesterday. I actually flew into the Boston airport that morning. In hindsight, those attacks opened our eyes to a previously overlooked attack vector – using a passenger jet as a missile. The folks running national security for the US had all sorts of scenarios for how we could be attacked on our own soil, but I’m not sure that vector was on their lists. It seems we security folks have to start thinking in a similarly orthogonal pattern. Since we started hearing some details of the EMC/RSA breach, and of the attacks on the Comodo and DigiNotar CAs, it has become clear the attackers have been re-thinking their paths of least resistance.

Let me back up a bit. Attackers will follow the path of least resistance to their intended target – they always have. Over the past few years, the path of least resistance has clearly involved exploiting both application and user weakness, rather than breaking technical security measures in network infrastructure. Why break down a door if the nincompoop on the other side will just let you in, and key Internet-facing apps don’t even have locks? That’s what we are seeing in practice. If an attacker is trying to breach a soft target, the user and application attack vectors remain the path of least resistance for the foreseeable future. The skills gap between the ends is pretty ugly, and not getting better. That’s why we spend so much time focusing on Reacting Faster and Better – it’s pretty much the only way to survive in an age of inevitable compromise.

But what if the target is not soft? By that I mean a well-fortified environment, without the typical user and/or application holes we typically see exploited. A well-segmented and heavily-monitored infrastructure without the standard attack vectors. For example, one of the big defense contractors, who protect the national secrets of the defense/industrial base.
Breaking down the doors here is very hard, and in many cases not worth the effort. So the attackers have identified a new low-resistance path – the security infrastructure protecting those hard targets. It was very clear with the RSA attack. That was all about gaining access to the token seeds and using them to compromise the real targets: US defense contractors. Even if RSA was as well-protected as a defense contractor, breaking into RSA once provided a leg up on all the defense contractors using RSA tokens. It’s not as clear with the Comodo or DigiNotar attacks. Those seem to be more politically motivated, but still represent an interesting redefinition of the man-in-the-middle attack: compromising the certificate trust chain that identifies legitimate websites.

So what? What impact does this have on day-to-day operations? Frankly, not much – many of us are so far behind on basic blocking and tackling on the stuff we already know about. But for those hard targets out there, it’s time to expand your threat models to look at the technology that enforces your security controls. I remember attending a Black Hat session a few years back by Tom Ptacek of Matasano, where he discussed his research into compromising pretty well-known IT management technology. That’s the kind of analysis we need looking forward. Push vendors to provide information about how they attack their own products and what they find. But don’t expect much. Vendors do not, as a rule, proactively try to poke holes in their own stuff. And if they do, they won’t admit weakness by disclosing what they find. So be prepared to do (and fund) much of this work yourself.

It’s time to start thinking that the new path of least resistance may be your security technology. That’s a challenge to the folks who build security products, as well as to those of you who protect hard targets. Who will rise to this challenge?
Photo credit: “Path of Least Resistance” originally uploaded by Billtacular


Making Bets

Being knee deep in a bunch of research projects doesn’t give me enough time to comment on the variety of interesting posts I see each week. Of course we try to highlight them both in the Incite (with some commentary) and in the Friday Summary. But some posts deserve a better, more detailed treatment.

We haven’t done an analysis, but I’d guess we find a pretty high percentage of what Richard Bejtlich writes interesting. Here’s a little hint: it’s because he’s a big brained dude. Early this week he posted a Security Effectiveness Model to document some of his ideas on threat-centric vs. vulnerability-centric security. I’d post the chart here but without Richard’s explanations it wouldn’t make much sense. So check out the post. I’ll wait.

When I took a step back, Richard’s labels didn’t mean much to me. But there is an important realization in that Venn diagram. Richard presents a taxonomy to understand the impact of the bets we make every day. No, I’m not talking about heading off to Vegas on a bender that leaves you… well, I digress. But the reality is that security people make bets every day. Lots of them. We bet on what’s interesting to the attackers. We bet on what defenses will protect those interesting assets. We bet on how stupid our employees are (they remain the weakest link). We also bet on how little we can do to make the auditors go away, since they don’t understand what we are trying to do anyway. And you thought security was fundamentally different than trading on Wall Street?

Here’s the deal. A lot of those bets are wrong, and Richard’s chart shows why. With limited resources we have to make difficult choices. So we start by guessing what will be interesting to attackers (Richard’s Defensive Plan). Then we try to protect those things (Live Defenses). Ultimately we won’t know everything that’s interesting to attackers (Threat Actions). We do know we can’t protect everything, so some of the stuff we think is important will go unprotected. Oh well.
Even better, we won’t be right about what we assume the attackers want, nor about which defenses will work. Not entirely. So some of the stuff we think is important isn’t, and some of our defenses protect things that aren’t important. As in advertising, a portion of our security spend is wasted – we just don’t know which portion. Oh well. We’ll also miss some of the things the attacker thinks are important. That makes it pretty easy for them, eh? Oh, well.

And what about when we are right? When we think something will be a target, and the attackers actually want it? And we have it defended? Well, we can still lose – a persistent attacker will still get their way, regardless of what we do. Isn’t this fun?

But the reason I so closely agree with most of what Richard writes is pretty simple. We realize the ultimate end result, which he summed up pretty crisply on Twitter (there are some benefits to a 140 character limit): “‘Managing risk,’ ‘keeping the bad guys out,’ ‘preventing compromise,’ are all failed concepts. How fast can you detect and correct failures?” And (http://twitter.com/taosecurity/status/108527362597060608): “The success of a security program then ultimately rests w/ the ability to detect & respond to failures as quickly & efficiently as possible.”

React Faster and Better, anyone?
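Bejtlich’s yardstick can be tracked with nothing fancier than timestamps on your incident records. This sketch (the data model is hypothetical, not from his post) computes mean time to detect and mean time to correct in hours:

```python
from datetime import datetime

# Hypothetical incident records: when the compromise started, when it was
# detected, and when it was corrected. The schema is illustrative only.
def mean_hours(incidents, start, end):
    """Average gap in hours between two timestamp fields across incidents."""
    gaps = [(i[end] - i[start]).total_seconds() / 3600 for i in incidents]
    return sum(gaps) / len(gaps)

incidents = [
    {"compromised": datetime(2011, 9, 1, 0), "detected": datetime(2011, 9, 1, 6),
     "corrected": datetime(2011, 9, 1, 12)},
    {"compromised": datetime(2011, 9, 2, 0), "detected": datetime(2011, 9, 2, 2),
     "corrected": datetime(2011, 9, 2, 10)},
]
mttd = mean_hours(incidents, "compromised", "detected")
mttc = mean_hours(incidents, "compromised", "corrected")
```

Trending those two numbers over time is one concrete way to measure whether you are actually reacting faster and better, rather than just claiming to.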


Incite 8/31/2011: The Glamorous Life

It was a Sunday like too many other Sundays. Get up, take the kids to Sunday school, grab lunch with friends, then take the kids to the pool. Head home, shower up, and then kiss the Boss and kids goodbye and head off to the airport. Again. Another week, another business trip. It’s a glamorous life.

I pass through security and suffer the indignity of having some (pleasant enough) guy grope me because I won’t pass through an X-ray machine because the asshats at TSA don’t understand the radiation impact. Maybe it makes other folks feel safe, but it’s just annoying to people aware of how ridiculous airport security theater really is. Man, how glamorous is that experience?

When I arrive at my destination (at 1am ET), I get on a tram with all the other East Coast drones and wait in a line to get my rental car. The pleasant 24-year-old trying to climb the corporate ladder by dealing with grumps like me reminds me why I shouldn’t depend on my AmEx premium rental car insurance. I not-so-politely decline. She doesn’t want an explanation of why she is wrong, and I don’t offer it. Glamor, baby, yeah!

I get to the hotel, which is comfortable enough. I sleep in a bit (since I’m now on the West Coast), and at 5am realize the hotel is literally right next to mass transit. Every 5 minutes, a train passes by. Awesome. I’m glad my body thinks it’s 8am or I’d probably be a bit upset. And the incredible breakfast buffet is perfect. Lukewarm hard-boiled eggs for protein. And a variety of crap cereals. At least they have a waffle maker. So much for my Primal breakfast. With this much glamor, I’m surprised I don’t see Trump at the buffet.

But then my strategy day starts, and now I remember why I do this. We have a great meeting, with candid discussions, intellectual banter, and lots of brainstorming. I like to think we made some progress on my client’s strategic priorities. Or I could be breathing my own exhaust. Either way, it’s all good.
I find a great salad bar for dinner and listen to the Giants' pre-season game on my way back to the hotel. Sirius in the rental car for the win. When I wake up the next morning, it's different. Thankfully the breakfast buffet isn't open yet. I head to the airport. Again. It takes me a little while to find a gas station to fill up the car. Oh well, it doesn't matter, I'm going home. I pass through security without a grope, get an upgrade, and settle in. As we take off, I am struck by the beauty of our world. The sun poking through the clouds as we climb. The view of endless clouds that makes it look like we are in a dream. The view of mountains thousands of feet below. Gorgeous. So maybe it's not a glamorous life, but it is beautiful. And it's mine. For that I'm grateful. -Mike

Photo credits: "Line for security checkpoint at Hartsfield-Jackson Airport in Atlanta" originally uploaded by Rusty Tanton

Incite 4 U

Painting the Shack gray: If you know Dave Shackleford, it's actually kind of surprising to see Dave discuss the lack of Black or White in the security world. He's not your typical shades-of-gray type of guy. Dave will go to the wall to defend what he believes, and frequently does. A lot of the time, he's right. In this post he makes a great point, which I paraphrase as: everyone has their own truth. There are very few absolutes in security or life. What is awesome for you may totally suck for me. But what separates highly functioning folks from assholes is the ability to agree to disagree. Unfortunately a lot of folks fall into the asshole camp because they can't appreciate that someone else's opinion may be right, given their different circumstances. I guess you need to be wrong fairly frequently (as I have been throughout my career) to learn to appreciate the opinions of other folks, even if you think they are wrong. – MR

Betting on the wrong cryptohorse: I will be the first to admit that I never went to business school, although I did manage IT at one. So I probably missed all those important MBA lessons like how to properly teamify or synergistically integrate holistic accounting process management. Instead I stick to simple rules like, "Don't make it hard for people to give you money," and "Don't build a business that completely relies on another company that might change its mind." For example, there are a few companies building out encryption solutions that are mostly focused on protecting data going into Salesforce.com. Seems like the sort of thing Salesforce themselves might want to offer someday, especially since data protection is one of the bigger inhibitors of their enterprise customer acquisition process. So we shouldn't be surprised that they bought Navajo Systems. Great for Navajo, not so much for everyone else. Sure, there are other places they can encrypt, but that was the biggest chunk of the market and it won't be around much longer. On that note, I need to get back to coding our brand new application. Don't worry, it only runs on the HP TouchPad – I'm sure that's a safe bet. – RM

Cutting off their oxygen: Brian Krebs' blog remains a favorite of mine, and his recent posts on Fake AV and the Pharma Wars read like old-fashioned gangsters-vs.-police movies. Fake AV is finally being slowed by very traditional law enforcement methods, as Ed Bott pointed out in his analysis of MacDefender trends. Identifying the payment processors and halting payments to the criminal organizations, as well as arresting some of the people directly responsible, actually works. Who knew? The criminals are using fake charities to funnel money to politicians in order to protect their illegal businesses. Imagine that! We know defenses and education to help secure the general public


Fact-Based Network Security: Compliance Benefits

As we discussed in the last post, beyond the operational value of fact-based network security, compliance efforts can benefit greatly from gathering data, and being able to visualize and report on it. Why? Because compliance is all about substantiating your control set to meet the spirit of whatever regulatory regime you need to comply with. Let's run through a simple example. During a PCI assessment, the trusty assessor shows up with his/her chart of requirements. Requirement 1 reads "Install and maintain a firewall configuration to protect cardholder data." So you have two choices at this point. The first is to tell the auditor that you have this, and hope they believe you. Yeah, probably not a recipe for success. Or, you could consult your network security fact-base and pull a report on network topology, which shows your critical data stores (based on assessments of their relative value), the firewalls in place to protect them, and the flow of traffic through the network to get to the critical assets/business systems. Next the auditor needs to understand the configuration of the devices to make sure unauthorized protocols are not allowed through the firewalls to expose cardholder data. Luckily, the management system also captures firewall configurations on an ongoing basis. So you have current data on how each device is configured, and can show the protocols in question are blocked. You can also explicitly show what IP addresses and/or devices can traverse the device, using which protocols or applications (in the case of a new, fancy application-aware firewall). You close out this requirement by showing some of the event logs from the device, which demonstrate what was blocked by the firewall and why. The auditor may actually smile at this point, will likely check the box in the chart, and should move on to the next requirement.
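To make that configuration evidence concrete, here is a minimal sketch of the kind of check a fact-base enables. The rule model and protocol names are hypothetical simplifications for illustration; real firewall configurations are vendor-specific and far richer.

```python
# Hypothetical sketch: verify that protocols your PCI scoping disallows
# are each covered by an explicit deny rule. Rule schema is illustrative.
DISALLOWED = {"telnet", "ftp"}  # assumed examples of protocols that must not reach cardholder data

def unblocked_protocols(rules, disallowed=DISALLOWED):
    """Return disallowed protocols that no deny rule covers."""
    denied = {r["protocol"] for r in rules if r["action"] == "deny"}
    return sorted(disallowed - denied)

rules = [
    {"action": "deny",  "protocol": "telnet", "dest": "10.1.0.0/16"},
    {"action": "allow", "protocol": "https",  "dest": "10.1.0.0/16"},
]
print(unblocked_protocols(rules))  # -> ['ftp'], an audit gap to close
```

The point isn't this particular check; it's that once configurations are captured continuously, substantiating a control becomes a query instead of a scavenger hunt.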
Prior to implementing your fact-based network security process, you spent a few days updating the topology maps (damn Visio), massaging the configuration files to highlight the relevant entries (using a high-tech highlighter), and finally going through a zillion log events to find a few examples to prove the policies are operational. Your tool doesn't make audit prep as easy as pressing a button, but it's a lot closer than working without tools.

Going where the money is

To be clear, compliance is a necessary evil in today's security world. Many of the projects we need to undertake have at least tangential compliance impact. Given the direct cost of failing an audit, potentially having to disclose an issue to customers and/or shareholders, and applicable fines, most large organizations have a pot of money to make the compliance issue go away. Smart security folks still think about security first, which means you continue to focus on implementing the right controls to protect the information that matters to you. But success still hinges on your ability to show how the project can impact compliance, either by addressing audit deficiencies or by making the compliance process more efficient, thus saving money. It's probably not a bad idea to keep time records detailing how long it takes your organization to prepare for a specific audit without some level of automation. The numbers will likely be pretty shocking. In many cases, the real costs of time and resources will pay for the tools to implement a fact-based network security process. As we wrap up our blog series in the next post, we'll take this from theory to practice, running through a scenario to show how this kind of approach would impact your operational security.
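The back-of-the-envelope math behind that funding argument is simple. Every number below is an assumption purely for illustration; plug in your own time records and quotes.

```python
# Illustrative numbers only: compare manual audit-prep labor against tooling cost.
HOURLY_RATE = 85          # assumed fully loaded analyst cost ($/hour)
manual_hours = 120        # topology maps, config review, log spelunking by hand
tooled_hours = 16         # report generation plus spot checks with a tool
tool_annual_cost = 6000   # hypothetical license fee

savings = (manual_hours - tooled_hours) * HOURLY_RATE - tool_annual_cost
print(savings)  # -> 2840; positive means the tool pays for itself on one audit
```

With multiple audits per year (PCI, SOX, internal), the case usually gets stronger, which is exactly why keeping those time records matters.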


Fact-Based Network Security: Operationalizing the Facts

In the last post, we talked about outcomes important to the business, and what types of security metrics can help make decisions to achieve those outcomes. Most organizations do pretty well with the initial gathering of this data. You know, when the reports are new and the pie charts are shiny. Then the reality – of the amount of work and commitment required to implement a consistent measurement and metrics process – sets in, which is when most organizations lose interest and the metrics program falls by the wayside. Of course, if there is a clear and tangible connection between gathering data and doing your job better, you make the commitment and stick with it. So it's critical, especially within the early phases of a fact-based network security process, to get a quick win and capitalize on that momentum to cement the organization's commitment to this model. We'll discuss that aspect later in the series. But consistency is only one part of implementing this fact-based network security process. In order to get a quick win and warrant ongoing commitment, you need to make sense of the data. This issue has plagued technologies such as SIEM and Log Management for years – having data does not mean you have useful and valuable information. We want to base decisions on facts, not faith. In order to do that, you need to make gathering security metrics an ongoing and repeatable process, and ensure you can interpret the data efficiently. The keys to these are automation and visualization.

Automating Data Collection

Now that you know what kind of data you are looking for, can you collect it? In most cases the answer is yes. From that epiphany, the focus turns to systematically collecting the types of data we discussed in the last post. Data sources like device configuration, vulnerability, change information, and network traffic can be collected systematically and at scale.
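One way to picture that systematic collection: each data source exposes a fetch step, and everything lands in one timestamped, normalized schema so it can be queried later. This is a sketch under assumed names; the source names, fields, and `fetch` callables are illustrative, not any specific product's API.

```python
# Minimal sketch of systematic collection: each source yields raw records,
# which we normalize into one common, timestamped fact schema.
import time

def collect(sources):
    facts = []
    now = time.time()
    for name, fetch in sources.items():
        for record in fetch():
            facts.append({"source": name, "collected_at": now, "data": record})
    return facts

# Hypothetical sources standing in for real config/vuln/traffic collectors.
sources = {
    "firewall_config": lambda: [{"device": "fw1", "rules": 212}],
    "vuln_scan":       lambda: [{"host": "10.1.2.3", "cve": "CVE-2011-0001"}],
}
facts = collect(sources)
print(len(facts), facts[0]["source"])
```

In practice this loop runs on a schedule and writes to durable storage, since (as noted above) data you fail to capture is gone for good.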
There is usually a question of how deeply to collect data, whether you need to climb the proverbial stack in order to gather application and database events/logs/transactions, etc. In general, we Securosis folk advocate collecting more rather than less data. Not all of it may be useful now (or ever). But once you miss the opportunity to capture data you don't get it back. It's gone. And of course which data sources to leverage depends on the problems you are trying to solve. Remember, data does not equal information, and as much as we'd like to push you to capture everything, we know it's not feasible. So balance data breadth and fidelity against cost and storage realities. Only you can decide how much data is enough to answer the questions of prioritizing activities. We tend to see most organizations focus on network, security, and server logs/events – at least initially. Mostly because that information is plentiful and largely useful in pinpointing attacks and substantiating controls. It's beyond the scope of this paper to discuss the specifics of different platforms for collecting and analyzing this data, but you should already know the answer is not Excel. There is just too much data to collect and parse. So at minimum you need to look for some kind of platform to automate this process.

Visualization

Next we come up against that seemingly intractable issue of making sense of the data you've collected. In this case, we see (almost every day) that a picture really is worth thousands of words (or a stream of thousands of log events). In practice, pinpointing anomalies and other suspicious areas which demand attention is much easier visually – so focusing on dashboards, charts, and reports becomes a key part of operationalizing metrics. Right, those cool graphics available in most security management tools are more than eye candy. Who knew? So which dashboards do you need? How many? What should they look like?
Of course it depends on which questions you are trying to answer. At the end of this series we will walk through a scenario to describe (at a high level, of course) the types of visualizations that become critical to detecting an issue, isolating its root cause, and figuring out how to remediate it. But regardless of how you choose to visualize the data you collect, you need a process of constant iteration and improvement. It’s that commitment thing again. In a dynamic world, things constantly change. That means your alerting thresholds, dashboards, and other decision-making tools must evolve accordingly. Don’t say we didn’t warn you. Making Decisions As we continue through our fact-based network security process, you now have a visual mechanism for pinpointing potential issues. But if your environment is like others we have seen, you’ll have all sorts of options for what you can do. We come full circle, back to defining what is important to your organization. Some tools have the ability to track asset value, and show visuals based on the values. Understand that value in this context is basically a totally subjective guess as to what something is worth. Someone could arbitrarily decide that a print server is as important as your general ledger system. Maybe it is, but this gets back to the concept of “relative value” earlier in the series. This relative understanding of an asset/business system’s value yields a key answer for how you should prioritize your activities. If the visualization shows something of significant value at risk, then fix it. Really. We know that sounds just too simple, and may even be so obvious it’s insulting. We mean no offense, but most organizations have no idea what is important to them. They collect very little data and thus have little understanding of what is really exposed or potentially under attack. So they have no choice but to fly blind and address whatever issue is next on the list, over and over again. 
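The relative-value prioritization described above can be sketched in a few lines. The assets, scores, and weighting below are hypothetical; the point is only that a simple value-times-exposure ranking puts high-value systems at risk at the top of the work queue instead of whatever is next on the list.

```python
# Hypothetical prioritization sketch: rank findings by relative asset value
# times exposure severity. All assets and scores here are made up.
findings = [
    {"asset": "print-server",   "value": 1, "severity": 3},
    {"asset": "general-ledger", "value": 9, "severity": 4},
    {"asset": "web-frontend",   "value": 6, "severity": 5},
]
for f in findings:
    f["priority"] = f["value"] * f["severity"]  # crude, but beats flying blind

ranked = sorted(findings, key=lambda f: f["priority"], reverse=True)
print([f["asset"] for f in ranked])  # -> general-ledger first, print-server last
```

The scores are subjective guesses, exactly as the text warns, but a consistently applied guess still yields a defensible ordering, which is more than most organizations have today.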
As we have discussed, that doesn’t work out very well, so we need a commitment to collecting and then visualizing data, in order to


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.