Infrastructure Security Research Agenda 2011—Part 4: Egress and Endpoints

In the first three posts of my 2011 Research Agenda (Positivity, Posturing and RFAB, Vaulting and Assurance) I mostly talked about how we security folks need to protect our stuff from them. You know, outside attackers trying to reach our stuff. Now let’s move on to people on the inside. Although most of us prefer to focus on folks trying to break in, it’s also important to put some forethought into protecting people inside the perimeter. Whether an employee loses a device (and compromises data), clicks the wrong link (resulting in a compromised device that gives attackers a foothold on the internal network), or even maliciously tries to exfiltrate data (WikiLeaks, anyone?), all of these attack scenarios are very real. So we have to think from the inside out about protecting endpoint devices, because nowadays that is probably the most common way for attackers to begin a multi-faceted attack. They’ll pwn an endpoint and then use it to pivot and find other interesting stuff. Yet we also have to focus a bit on breaking one of the legs of Rich’s Data Breach Triangle – the egress leg. Unless the attackers can get the data out, it’s not a breach. So a lot of the egress research agenda will focus on content filtering at the edge, to ensure our sensitive stuff doesn’t escape.

Endpoints

The good news is that we did a bunch of research to lay the foundation for endpoint security in 2010. Looking at 2011, we want to dig deeper and start thinking about dealing with all of these newfangled devices like smartphones, and examine technologies like application white listing, which implements our positivity model on endpoint devices.

Background: Endpoint Security Fundamentals

  • Endpoint Protection Suite Evolution: Using the Endpoint Fundamentals content as a base, we need to delve into what the EPP suite looks like moving forward, and how capabilities like threat intelligence, HIPS, and cloud services will remake what we think of as the endpoint suite.
  • Application White Listing: Where, When, and Why? We’ve written a bit about application white listing concepts, but it’s not a general purpose control – yet. So we’ll dig into specific use cases where white listing makes sense, with some deployment advice to make sure your implementation is successful (and doesn’t break too much).
  • Mobile Device Security: There is a lot of hype but not much in the way of demonstrable weaponized threats to our smartphones, so we’ll document what you need to know and what to ignore, and discuss some options for protecting mobile devices.
  • Quick Wins with Full Disk Encryption: Everyone is buying FDE, but how do you choose it, and how do you get quick value?

Again, there is lots to think about for protecting endpoints, so we’ll be pretty busy on these topics in 2011.

Egress

Egress filtering on the network will be covered by the Positivity research. But as Adrian mentions in his research agenda, plenty of content goes out of your organization via email and web protocols, and we need to filter that traffic (before you have a breach).

  • Understanding and Selecting DLP, v2: Rich’s recent update to this paper is a great base, and we may dig into specific endpoint or gateway DLP to prevent critical content from leaving the organization – which plays directly into this egress theme. (For a taste of what content filtering at the edge actually does, see the sketch at the end of this post.)
  • Web Security Evolution: Web filters and their successors have been around for years, so what is the future of the category, and how can/should customers with existing web security implementations move forward? And how will SaaS impact how customers provide these services?
  • Email Security Evolution: Very similar conceptually to web security evolution, but of course the specifics are very different.

So there you have it. Yes, I’ll be pretty busy next year, and that’s a good thing. I’m still looking for feedback on these ideas, so if one (or more) of these research projects resonates, please let me know. Or if some don’t, that would be interesting as well.
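Since the egress theme keeps coming back to content filtering, here is a minimal sketch of the core idea behind DLP-style egress inspection: pattern-match outbound content for sensitive data, and validate candidate matches to keep false positives down. This is illustrative Python, not any vendor’s implementation; the credit card example pairs a regex with the Luhn checksum to weed out random digit strings.

```python
import re

# Candidate PANs: 13-16 digits, optionally separated by spaces or dashes.
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){12,15}\d\b")

def luhn_valid(digits: str) -> bool:
    """Luhn checksum: weeds out most random digit strings."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_card_numbers(outbound_text: str) -> list[str]:
    """Return candidate card numbers that pass the Luhn check."""
    hits = []
    for match in CARD_PATTERN.finditer(outbound_text):
        digits = re.sub(r"[ -]", "", match.group())
        if luhn_valid(digits):
            hits.append(match.group())
    return hits

# A real gateway sits inline on SMTP/HTTP and blocks or quarantines;
# here we just flag the message.
msg = "Please bill 4111 1111 1111 1111 for the December invoice."
if find_card_numbers(msg):
    print("BLOCK: possible cardholder data in outbound message")
```

A production gateway handles many more content types and policies, but this detect-validate-block loop is the heart of edge content filtering.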


React Faster and Better: Incident Response Gaps

In our introduction to this series we mentioned that the current practice of incident response isn’t up to dealing with the compromises and penetrations we see today. It isn’t that the incident response process itself is broken – the problem is how companies implement response. Today’s incident responders are challenged on multiple fronts. First, the depth and complexity of attacks are significantly more advanced than commonly discussed. We can’t even say this is a recent trend – advanced attacks have existed for many years – but we do see them affecting a wider range of organizations, with a higher degree of specificity and targeting than ever before. It’s no longer merely the defense industry and large financial institutions that need to worry about determined persistent attackers. In the midst of this onslaught, the businesses we protect are using a wider range of technology – including consumer tools – in far more distributed environments. Finally, responders face the double-edged sword of a plethora of tools: some are highly effective, while others contribute to information overload.

Before we dig into the gaps we need to provide a bit of context. First, keep in mind that we are focusing on larger organizations with dedicated incident response resources. Practically speaking, this probably means at least a few thousand employees and a dedicated IT security staff. Smaller organizations should still glean insight from this series, but probably don’t have the resources to implement the recommendations. Second, these issues and recommendations are based on discussions with real incident response teams. Not everyone has the same issues – especially across large organizations – nor the same strengths. So don’t get upset when we start pointing out problems or making recommendations that don’t apply to you – as with any research, we generalize to address a broad audience.

Across the organizations we talk with, some common incident response gaps emerge:

  • Too much reliance on prevention at the expense of monitoring and response. We still find even large organizations that rely too heavily on their defensive security tools rather than balancing prevention with monitoring and detection. This imbalance of resources leads to gaps in the monitoring and alerting infrastructure, with inadequate resources for response. All organizations are eventually breached, and targeted organizations always have some kind of attacker presence. Always.
  • Too much of the wrong kinds of information too early in the process. While you do need extensive auditing, logging, and monitoring data, you can’t use every feed and alert to kick off your process or in the initial investigation. And to expect that you can correlate all of these disparate data sources as an ongoing practice is ludicrous. Effective prioritization and filtering is key.
  • Too little of the right kinds of information too early (or late) in the process. You shouldn’t have to jump straight from an alert into manually crawling log files. By the same token, after you’ve handled the initial incident you shouldn’t need to rely exclusively on SIEM for your forensic investigation and root cause analysis. This again goes back to filtering and prioritization, along with sufficient collection. It also requires two levels of collection for your key device types: first, what you can collect continuously; second, the much more detailed information you need to pinpoint root cause or perform post-mortem analysis.
  • Poor alert filtering and prioritization. We constantly talk about false positives because those are the most visible, but the problem is less that an alert triggered, and more determining its importance in context. This ties directly to the previous two gaps, and requires finding the right balance between alerting, continuing collection of information for initial response, and gathering more granular information for after-action investigation.
  • Poorly structured escalation options. One of the most important concepts in incident response is the capability to smoothly escalate incidents to the right resources. Your incident response process and organization must take this into account. You can’t effectively escalate with a flat response structure; tiering based on multiple factors such as geography and expertise is key. And this process must be determined well in advance of any incident. Escalation failure during response is a serious problem.
  • Response whack-a-mole. Responding without the necessary insight and intelligence leads to an ongoing battle where the organization is always one step behind the attacker. While you can’t wait for full forensic investigations before clamping down on an incident to contain the damage, you need enough information to make informed and coordinated decisions that really stop the attack – not merely a symptom. So balancing hair-trigger response against analysis/paralysis is critical to minimize damage and potential data loss.

Your goal in incident response is to detect and contain attacks as quickly as possible – limiting the damage by constraining the window within which the attacker operates. To pull this off you need an effective process with graceful escalation to the right resources, to collect the right amount of the right kinds of information to streamline your process, to do ongoing analysis to identify problems earlier, and to coordinate your response to kill the threat instead of just a symptom. But all too often we see flat response structures, too much of the wrong information early in the process with too little of the right information late in the process, and a lack of coordination and focus that allows the bad guys to operate with near impunity once they establish their first beachhead. And let’s be clear: they have a beachhead. Whether you know about it is another matter.

In our next couple posts Mike will start talking about what information to collect and how to define and manage your triggers for alerts. Then I’ll close out by talking about escalation, investigations, and intelligently kicking the bad guys out. To make the filtering and prioritization gap a bit more concrete, a simple scoring sketch follows below.
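As an illustration of prioritizing alerts in context – not any particular SIEM’s logic; the weights, fields, and asset names here are invented for the example – consider a minimal sketch that scores alerts by severity and asset criticality, so responders work the queue top-down instead of chasing every trigger:

```python
from dataclasses import dataclass

# Invented asset criticality map - in practice this comes from your CMDB.
ASSET_CRITICALITY = {"cardholder-db": 10, "dmz-web": 7, "user-laptop": 3}

@dataclass
class Alert:
    source: str                 # asset that fired the alert
    severity: int               # 1 (low) to 10 (high), normalized per tool
    corroborated: bool = False  # seen by a second, independent data source?

def priority(alert: Alert) -> float:
    """Score alerts in context - raw severity alone orders the queue badly."""
    score = alert.severity * ASSET_CRITICALITY.get(alert.source, 1)
    if alert.corroborated:
        score *= 1.5  # independent confirmation beats a lone trigger
    return score

alerts = [
    Alert("user-laptop", severity=9),
    Alert("cardholder-db", severity=4, corroborated=True),
]
# A medium-severity alert on a critical asset outranks a screaming laptop.
for a in sorted(alerts, key=priority, reverse=True):
    print(f"{priority(a):6.1f}  {a.source}  sev={a.severity}")
```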


Research Agenda 2011: the Open Research Version

It’s time to post my research agenda for 2011. My long-winded Securosis compatriot has chosen a thematic approach to discussing coverage areas, and while it’s an excellent – and elegant – idea, I get lost amongst all the elements presented. So unlike Mike, I won’t be presenting my coverage areas so artistically. Instead I will stick to the technology variants I hear customers asking about, as well as the trends I see within different sub-segments of the security industry. For the areas of security I cover, I know what customers ask us about, and I see a few evolving trends. Most have to do with Cloud – surprise! – and how to take advantage of cheap, plentiful resources without getting totally hosed in the process. We are a totally transparent research firm, so I will throw out some ideas and ask which you think are the most important. We try to balance what customers think is important, what we think is important, and what vendors think is important. It’s easy when the three overlap, but that is seldom the case. So I will carve out what I think we should cover, and ask you for your ideas and feedback.

Cloud trends

  • Logging in the Cloud: Cheap, fast, and easy usually wins, so cheap cloud resources coupled with basic logging services seem a key proposition for security and operations. We talked a lot about SIEM this year, as there was plenty of angst among SIEM customers looking to squeeze more value from their deployments while reducing costs. This year I see more firms moving operations to the cloud and needing to cut through the fog to determine what the frack is going on. Or what to store. Or how it should be secured.
  • Web Application Security: Understanding and Selecting a Web Application Security Program is the most popular research paper we have ever produced, and downloads remain very high two years after launch. Our intention is either to refresh that paper and relaunch – as the content is even more applicable today than it was then – or to drill down into specific technologies such as dynamic web application testing (black box & grey box) and WAF for in-house services and SaaS.
  • Content Security: This umbrella covers email security, anti-spam, DLP (Lite), secure web gateways, global intelligence, and anti-virus. And yes, viruses and spam are still a problem. And yes, the DLP features bundled with content security are ready for prime time. We have written a lot about content security, and when we did we were witnessing the evolution of SaaS and cloud based content security offerings. Now these are proven services. We plan to do a thorough job, producing Understanding and Selecting a Cloud Content Security Solution.

Consolidation and maturing market trends

  • Quick Wins with Tokenization: Tokenization is one of the few technologies with serious potential to cut costs and simplify security. While adoption rates are still low, we get tons of inquiries. Our previous work in tokenization outlined the available technology variants; now we are looking at applications of the technology and quick wins for adoption. PCI is the principal application, and the use case is fairly simple despite multiple tokenization options, but the long term implications for health care data are equally compelling and slightly more complicated. We believe the mid-market is moving towards SaaS based solutions, and enterprise customers to in-house software. Edge tokenization, tokenization adoption rates, PCI scope reduction, and fraud detection are all open topics. We are open to suggestions on how to focus this paper. (A minimal sketch of the core vault mechanism appears at the end of this post.)
  • Assessment: Much as we have seen a more holistic vision of where database security is headed, assessment vendors have evolved as well. We expect vendors to pitch different stories in order to differentiate themselves, but in this case each vendor genuinely has a different model for how assessment fits within the greater application security context. Internally, we have discussed a couple of paper ideas on understanding the technologies, as well as a market update for the space as a whole. It’s been apparent for some time that the assessment market is going in slightly different directions – I see four separate visions! Which best matches enterprise customer requirements? Where is the assessment market headed? It’s totally confusing for customers trying to compare vendors and make sense of what would seem to be a stable and mature segment.

Emerging trends

  • Building Security In: The single topic I believe benefits the most people is security in code development. Gunnar and I write a lot about how to build security into product development processes, and have lots to say on the subject. “Quick Wins for Rugged”, “Agile Process Adjustments for Secure Code Development”, “Security Metrics in Code Development that Matter”, “Truth, Lies and Fiction with Application Security”, and last but not least “Risk Management in Software Development” all merit research.
  • Continuous Controls Monitoring: We are often asked questions by customers interested in compliance monitoring, and this one is near the top of the list. Security and compliance controls are scattered throughout the organization, and customers want to put them under a single management umbrella.
  • ADMP: We have discussed several ideas for updating the original Database Activity Monitoring paper, as well as the evolution of DAM from a product to a feature. Yes, I called it evolution. A couple years ago Rich blogged about where he felt the database security and WAF markets needed to go. He called this Application & Database Monitoring & Protection. Several companies have realized all or part of this vision and are starting to “take it to the next level”. But visions for how to leverage the technology are changing. Once again, several vendors offer different views of how the technology should be used.
  • Virtualization of Internet Domains: There is a great deal of discussion of needing a new Internet for security reasons. And there are many services – SCADA and ATMs come to mind – that should never have been put on the Internet. And there are
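Circling back to the Quick Wins with Tokenization topic above: here is a minimal sketch of the basic vault model – swap a real PAN for a random surrogate and keep the mapping in a separate, locked-down store. The class and storage here are invented for illustration; a production system adds encryption, access control, and durable storage, and format-preserving variants exist as well.

```python
import secrets

class TokenVault:
    """Toy token vault: random surrogates, mapping kept server-side."""
    def __init__(self):
        self._token_to_pan = {}
        self._pan_to_token = {}

    def tokenize(self, pan: str) -> str:
        if pan in self._pan_to_token:  # same PAN always maps to same token
            return self._pan_to_token[pan]
        # Random 16-digit surrogate; no mathematical relation to the PAN.
        token = "".join(secrets.choice("0123456789") for _ in range(16))
        self._token_to_pan[token] = pan
        self._pan_to_token[pan] = token
        return token

    def detokenize(self, token: str) -> str:
        """Only systems with vault access can recover the real PAN."""
        return self._token_to_pan[token]

vault = TokenVault()
t = vault.tokenize("4111111111111111")
# Downstream apps store and pass around only the token, which is how
# tokenization shrinks PCI scope to the systems that touch the vault.
print(t, vault.detokenize(t) == "4111111111111111")
```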


Friday Summary: December 17, 2010

I think we can firmly declare December 2010 the Month of Pwnage. Between WikiLeaks, Gawker, McDonalds, and Anonymous DDoS attacks, I’m not sure infosec has been in the news this much since the early days of big data breaches. Heck, I haven’t been in the news this much since I got involved with the Kaminsky DNS thing. To be honest, it’s a little refreshing to have a string of big stories that don’t involve Albert Gonzales. But here’s the thing I find so fascinating: in a very real sense, most of these high profile incidents are meaningless compared to the real compromises occurring daily out there. Our large enterprise clients are continuously compromised and mostly focused on minimizing the damage. While everyone worries about Gawker passwords, local bad guys are following delivery trucks and stealing gifts off doorsteps – our local police nailed someone who hit a dozen houses and 50 gifts, and Pepper also had a couple incidents. I can no longer tell someone my profession without hearing a personal – generally recent – story of credit card or bank fraud. Heck, this week my bank teller described how a debit card she cut up months earlier was used for online purchases. But I guess none of that is nearly as interesting as Gizmodo and Lifehacker account compromises. Or DDoS attacks that don’t cause any real damage. And even that story became pretty darn funny when they tried to attack Amazon… which is sort of like trying to deflect the course of the Sun with a flock of highly-motivated carrier pigeons. I love my job. On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Rich quoted in the Wall Street Journal.
  • Rich also quoted by the AP on the Gawker hack… which made it into a couple hundred publications. For the record, I wasn’t trying to downplay the severity to Gawker, but to contrast vandalism-style attacks (however severe) against financially motivated ones. Some of the context was lost, and I can’t blame the journalist.
  • Network Security Podcast, Episode 225.
  • Mike quoted in Weighing Optimism vs. Pragmatism.
  • Dark Reading on Gawker Goof.

Favorite Securosis Posts

  • David Mortman: Market Maturity and Security Competitive Advantage.
  • Mike Rothman: Get over it. If we spent half the time doing stuff that we spend bitching about it, a lot more would get done. Rich has it exactly right in this one.
  • Adrian Lane: Market Maturity and Security Competitive Advantage. Not sure the title captures the essence, but an important lesson in how the security industry is shaped.
  • Rich: Sigh. Everyone stole my fave (Market Maturity). I guess we should have written more this week.

Other Securosis Posts

  • React Faster and Better: Incident Response Gaps.
  • Infrastructure Security Research Agenda 2011 – Part 4: Egress and Endpoints.
  • Infrastructure Security Research Agenda 2011 – Part 3: Vaulting and Assurance.
  • Incite 12/15/2010: It’s not a sprint….
  • Infrastructure Security Research Agenda 2011 – Part 2: Posturing and Reacting Faster/Better.
  • Quick Wins with DLP Webinar.

Favorite Outside Posts

  • Rich: The Real Lessons Of Gawker’s Security Mess. Daniel nails it with some hype-free, useful in-depth coverage. Some serious pwnage here.
  • Adrian Lane: DO NOT poke the bear. And the beauty is that it ends with 1.
  • David Mortman: The Flawed Legal Architecture of the Certificate Authority Trust Model.
  • Mike Rothman: Can’t measure love. xkcd via Chandler. We can’t measure everything, but we can measure some things. And that’s key to remember for 2011 planning.
  • Pepper: Avast! Beware ‘pirates’!. I just wish ‘Avast’ could be the most ‘pirated’ software of all time, because the name is just too perfect.

Research Reports and Presentations

  • The Securosis 2010 Data Security Survey.
  • Monitoring up the Stack: Adding Value to SIEM.
  • Network Security Operations Quant Metrics Model.
  • Network Security Operations Quant Report.
  • Understanding and Selecting a DLP Solution.
  • Understanding and Selecting an Enterprise Firewall.
  • Understanding and Selecting a Tokenization Solution.

Top News and Posts

  • Major Ad Networks Found Serving Malicious Ads.
  • Backscatter X-Ray Machines Easily Fooled (pdf).
  • Back door in HP network storage solution – Update.
  • Mozilla Adding Web Applications to the Security Bug Bounty Program.
  • Dancing Snowman storms its way across Facebook.
  • OpenBSD has FBI backdoor, claims contractor. Most likely a hoax.
  • Your email deserves due process.
  • Over 500 patches for SAP.
  • HeapLocker Tool Protects Against Heap-Spray Attacks.
  • Twitter Spam Results from Gawker Leak.
  • Gawker Password Pwnage.
  • Microsoft to address IE, Stuxnet flaws.

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to Marisa, in response to Get over it.

Only my dad calls it The BayThreat, Rich. :p Gal Shpantzer had a great talk at DojoCon also this weekend about the “Security Outliers” and using analogies from other health and safety industries to tackle the subjects of infosec education and adoption. Seems like there is hope out there, and when the security industry is as old as sterilization practices in hospitals we’ll be seeing more trickle down adoption.


Incite 12/15/2010: It’s not a sprint…

One of the issues with being a high achiever (at least in my own mind) is that you’re always in a rush. Half the time we don’t know where we’re going, but we need to get there fast. And it results in burn-out, grumpiness, and poor job performance – the worst thing for someone focused on achievement. A mentor of mine saw this tendency in me early on and imprinted a thought that I still think about often: “It’s not a sprint, Mike, it’s a marathon.” Man, those words speak the truth. Rich’s post on Monday urging us to Get over it is exactly right. It made me think about sprints and marathons, and also the general psyche of successful security folks. We are paranoid, we are cynical, we expect the worst in people. We have to – it’s our job. But do this long enough and you can lose faith. I think that’s what Rich is referring to, especially at the end of yet another year when the bad guys won, whatever that means. So this is the deal. Remember this is a marathon. The war is not won or lost with one battle (unless you take a spear to the chest, that is). The bad guys will continue to innovate. Assuming you are a good guy/gal, you’ll struggle all year to catch up and still not get there. Yes, much of sleeping at night as a security person involves accepting that our job is Sisyphean. We will always be pushing the rock up the hill. And we’ll never get there. It’s about learning to enjoy the battle. To appreciate the small victories. And to let it go at the end of the day and go home with no regret. I know folks like to vent on Twitter and write inflammatory blog posts, because they can commiserate with all their cynical buddies and feel like they belong. Believe me, I get that. But I also know a lot of these folks pretty well, and most love the job (as dysfunctional as it is) and couldn’t think of doing anything else. But if you are one of those who can’t get past it, I suggest you spend some time over the holidays figuring out whether security is the right career path for you. It’s okay if it’s not. Really. What’s not okay is squandering the limited time you have on something that makes you miserable.

Photo credits: “Day 171” originally uploaded by Pascal

Incite 4 U

Anti-Exploitation works. Who knew? Rich has been talking about anti-exploitation defenses on endpoints for a long time. I added a bit in Endpoint Security Fundamentals, but the point has been that we need to make it harder (though admittedly never impossible) for hackers to attack memory. Now Microsoft itself has a good analysis of the effectiveness of DEP and ASLR, and their value both alone and together. Clearly these controls will stop some attacks, but not all, so don’t get lulled into a false sense of security just because you leverage these technologies where possible. They are a good start, but you aren’t done. You’re never done, but you already know that. (For the curious, a quick way to check whether a given Windows binary opts into these protections is sketched below.) – MR
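Since DEP and ASLR are opt-in at the binary level (linker flags recorded in the PE header), here is a minimal sketch of checking those bits yourself. Illustrative Python, assuming the headers fit within the first 4KB of the file; the flag values come from the published PE format.

```python
import struct

# DllCharacteristics flags from the published PE format specification.
DYNAMIC_BASE = 0x0040  # binary opts into ASLR
NX_COMPAT    = 0x0100  # binary opts into DEP

def check_mitigations(path):
    """Read the DEP/ASLR opt-in bits from a PE file's optional header."""
    with open(path, "rb") as f:
        header = f.read(4096)  # headers normally fit in the first 4KB
    if header[:2] != b"MZ":
        raise ValueError("not a PE file")
    pe_offset = struct.unpack_from("<I", header, 0x3C)[0]
    if header[pe_offset:pe_offset + 4] != b"PE\x00\x00":
        raise ValueError("bad PE signature")
    # Optional header starts after the 4-byte signature + 20-byte COFF
    # header; DllCharacteristics sits at offset 70 for PE32 and PE32+.
    opt_header = pe_offset + 4 + 20
    dll_chars = struct.unpack_from("<H", header, opt_header + 70)[0]
    return {"ASLR": bool(dll_chars & DYNAMIC_BASE),
            "DEP": bool(dll_chars & NX_COMPAT)}

print(check_mitigations(r"C:\Windows\System32\notepad.exe"))
```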
Out with the old: Gunnar Peterson asks: Is your site more secure than Gawker? – covering the iceberg of password reuse across sites, but also stating that passwords are intrinsically unsafe. Sure, they provide all or nothing access, but I don’t think the discussion should center on the damage caused by bad passwords. I’d say we know that. Instead we should focus on alternatives we could actually implement to fight this trend. Passwords are like statistics in baseball: they have been around so long they are taken for granted, and most IT professionals can’t wrap their heads around the concept of life without them. Bill Cheswick gave a great presentation at OWASP 2010 in Irvine, with evidence on why passwords are unnatural devices, tips on improving password policies, and most importantly alternative methods for establishing identity (26:30 in) such as Passfaces, Illusion, Passmaps, and other types of challenge/response. Many of these alternatives avoid storing Gunnar’s proverbial land mine. – AL

IE9 puts a cap in the drive-by: We all know Microsoft Internet Explorer security sucks, right? I mean, that’s what I read in all the Slashdot comments. Too bad the latest NSS Labs report shows exactly the opposite. NSS hired some alcoholic, porn, and gambling obsessed rhesus monkeys to browse all the worst of the Internet for a few days and see which browsers showed the best defenses against drive-by and downloadable malware. The winner? IE9 (beta) with a 99% success rate, followed by IE8 at 90%, then Firefox at… 19%. They did test Firefox without our recommended NoScript and other security enhancing plug-ins, but that accurately reflects how the great unwashed surf the web. Despite being a Mac fanboi, for a couple years now I’ve been doing all my banking on a Win7 system with IE8/9. It’s nice to see numbers back up my choice. – RM

Fox in the henhouse alert: Speaking of anti-malware tests, it seems the endpoint security vendors are banding together to reset the testing criteria, with the willing participation of ICSA Labs. To be clear, this is a specific response to the tests NSS Labs has been running, which make all the endpoint vendors look pretty bad. So why not work with a respected group like ICSA to redefine the testing baseline, since the world has changed? Conceptually it’s a good idea; in practice… we’ll see. I have a lot of friends at ICSA, so I don’t want to be overly negative out of the gate, but let’s just say I doubt any of the baseline tests will make mincemeat out of the endpoint security suites. And thus they may not reflect real world use. You can quibble with NSS and their anti-malware testing methodology, but whatever they are doing is working, as demonstrated by the EPP vendors uniting against


Infrastructure Security Research Agenda 2011—Part 3: Vaulting and Assurance

Getting back to our Infrastructure Security Research Agenda for 2011 (Part 1: Positivity; Part 2: Posturing and RFAB), let’s turn our attention to two more areas of focus. The first is ‘vaulting’, a fancy way of talking about network segmentation with additional security controls based on what you are protecting. Then we’ll touch on assurance, another fancy term for testing your stuff.

Vaulting

As I described in my initial post on the topic, this is about network segmentation and designing specific control sets based on the sensitivity of the data. Many folks have plenty of bones to pick with the PCI Data Security Standard (DSS), but it has brought some pretty good security practices into the common vernacular. Network segmentation is one; another is identifying critical data and then segregating it from general purpose (less sensitive) data. Of course, PCI begins and ends with cardholder data, and odds are there’s more to your business. But the general concepts – figuring out what is important (‘in scope’, in PCI parlance), making sure only folks who need access to that data have it, and then using all sorts of controls to make sure it’s protected – are goodness. These concepts can and should be applied across all your data, and that’s what vaulting is about. In 2011, we’ll be documenting a lot of what this means in practical terms, given that we already have lots of gear that needs to evolve (like IDS/IPS), as well as additional device types (mobile) that fundamentally change who has access to our stuff, and from where. We can’t boil the ocean, so our research will happen in stages. Here are some ideas for breaking down the concepts:

  • Implementing a Trusted Zones Program: This project focuses on how to implement the vaulting (trusted zones) concept, starting with defining and then classifying the data. Next, design the control sets for each level of sensitivity. Finally, implement network segmentation with the network ops team. It also includes a discussion of keeping data definitions up to date and control sets current. (A toy sketch of zone-based policy follows at the end of this section.)
  • IDS/IPS Evolution: Given the evolution towards application aware firewalls (see Understanding and Selecting an Enterprise Firewall), the role of the traditional network-based IDS/IPS clearly must and will evolve. But the reality is that millions of customers use these capabilities, so they are not going away overnight. This research will help customers understand how their existing IDS/IPS infrastructure will play in this new world order, and how end users should think about intrusion prevention moving forward.
  • Protecting Wireless: Keep in mind that we are still dealing with the ingress aspects, but pretty much all organizations have some kind of wireless networks in their environments, so we need to document ways to handle them securely, and how the wireless infrastructure needs to play with other network security controls. There are many compliance issues to deal with as well, such as avoiding WEP.

Yes, combining the Positivity and Vaulting concepts does involve a significant re-architecture/re-deployment of network security over the next few years. You didn’t really think you were done, did you?
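To make the trusted zones idea concrete, here is a minimal sketch of zone-based policy evaluation: classify assets into zones by data sensitivity, then permit only explicitly defined inter-zone flows – positivity applied to segmentation. The zone names and rules are invented for the example; real enforcement lives in firewall rules and VLAN/ACL design, not application code.

```python
# Zones ordered by data sensitivity; assets are assigned when classified.
ZONES = {
    "vault":    {"cardholder-db", "hr-db"},      # most sensitive data
    "internal": {"app-server", "file-server"},
    "general":  {"user-laptop", "printer"},
}

# Default-deny: only flows listed here are permitted (source, dest, port).
ALLOWED_FLOWS = {
    ("internal", "vault", 5432),   # app tier may query the database
    ("general", "internal", 443),  # users may reach internal apps over TLS
}

def zone_of(asset: str) -> str:
    for zone, members in ZONES.items():
        if asset in members:
            return zone
    return "general"  # unclassified assets get the least-trusted zone

def flow_allowed(src: str, dst: str, port: int) -> bool:
    """Positivity model: deny anything not explicitly permitted."""
    return (zone_of(src), zone_of(dst), port) in ALLOWED_FLOWS

# A user laptop talking straight to the cardholder database? Denied.
print(flow_allowed("user-laptop", "cardholder-db", 5432))  # False
print(flow_allowed("app-server", "cardholder-db", 5432))   # True
```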
Security Assurance

One of the areas I’ve been all over for the past 5 years is the need to constantly test our defenses. The bad guys are doing this every day, so we need to as well – if only to know what they are going to find. So I’m a big fan of penetration testing (using both humans and tools), and I think we collectively need to do a better job of understanding what works and what doesn’t. There are many areas to focus on for assurance. Here are a few ideas for research that we think could even be useful:

  • Scoping the Pen Test: Many penetration tests fail because they aren’t scoped for success. This research project will focus on defining success, setting the ground rules to get maximum impact from a pen test, and deciding if/when to pull the plug if internal buy-in can’t be gained.
  • Automating Pen Testing: We all seem to be fans of tools that automate pen tests, but why? We’ll dig deeply into what these tools do, how to use them safely, what differentiates the offerings, and how to use them systematically to figure out what can really be exploited, as opposed to what is merely vulnerable.

As you can see, there is no lack of stuff to write about. Next we’ll turn the tables a little and deal with the egress research ideas we are percolating.


Market Maturity and Security Competitive Advantage

One advantage of my background is that I’ve used and marketed/sold security products, as well as followed the industry for a long time, so I see the same patterns over and over again. But before I jump into that, you all need to head over to Lenny Zeltser’s blog. He’s doing a lot of writing, and given the general lameness of the rest of us security bloggers, it’s nice that we have a new victim thought leader to peruse. Lenny is doing a series now on defining Competitive Advantage for Security Products. The posts deal with Ease of Use and Price. As you would expect, I have opinions on this topic. I see both as indications of product/category maturity. I don’t necessarily want to delve into the entire adoption curve for security products, but suffice it to say most innovative products are narrowly defined and targeted towards an enterprise-class customer. Why? Enterprises have the money to pay way too much for way too little capability, which half the time doesn’t even work. But they’ve got small problems on large enough scales that they’ll write big checks on the faint hope of plugging in a box and making the issue go away. Over time, products/categories either solve problems or they don’t. If they make the cut, interest starts to develop among smaller companies that likely have the problem (though not at the same scale), but not the money to write big checks. Smaller companies also tend to be less technically sophisticated than a typical enterprise. Of course that is a crass overgeneralization, but at minimum an enterprise has resources to throw at the problem, so a product with a crappy user experience usually doesn’t deter them. They’ve got folks to figure it out. Smaller companies, not so much. Which is why, as a product/category matures and becomes more applicable to the smaller company market segment, the focus turns quickly to ease of use and price. Small companies need a streamlined user experience and don’t want to pay a lot. So they don’t. I lived through this in the anti-spam business. In its early days, customers (mostly in the enterprise) wanted lots of knobs and dials to tune their catch rates (and keep their people busy and employed). At some point customers got tired of endless configuration, so they opted for a better user experience. Early leaders which couldn’t dumb down their products suffered (yes, I still have road rash from that). At the same time, Barracuda introduced a device for about 10% of the typical price of an anti-spam gateway. Price wasn’t just a differentiator here, it was a disruptor. $50K non-competitive deals became $10K crapfests. It’s hard to grow a business exponentially when you have to compete for 20% of the revenue you previously got. Right, not a lot of fun. And now managed anti-spam services provide an even easier and more cost effective option, so guess where many customers are moving their spending? I agree with Lenny that ease of use and price can be used for competitive advantage – but only if the market is mature enough. A low-cost DLP or SIEM (as opposed to log management) tool won’t be successful, because the products are not easy enough to use. So for end users buying this technology: keep your expectations on price and ease of use aligned with market maturity, and you can find the right product for your environment, regardless of your size.


Infrastructure Security Research Agenda 2011—Part 2: Posturing and Reacting Faster/Better

The first of my Infrastructure Security Research Agenda 2011 posts, introducing the concept of positivity, generated a lot of discussion. Not only attached to the blog post (though the comments there were quite good), but in daily discussions with members of our extended network. Which is what a research agenda is really for: it’s a way to throw some crap against the wall and see what sticks.

Posturing

So let’s move on to the next aspect of my ingress research ideas for the next year. It’s really not novel, but considering how awful most organizations are at fairly straightforward blocking and tackling, it makes sense to keep digging into this area and continue publishing actionable research to help practitioners become a bit less awful. I’m calling this topic area Posturing because it’s really about closing the doors, battening down the windows, and making sure you are ready for the oncoming storm. And yes, it’s storming out there. We talked about this a bit in the Endpoint Security Fundamentals series, under Patching and Secure Configurations. There are three aspects of Posturing:

  • Vulnerability Management: Amazingly enough, we haven’t yet written much on how to do vulnerability management. So we’ll likely start with a short fundamentals series, and follow up with a series on Vulnerability Management Evolution, because with the advent of integrated application and database scanning – combined with the move towards managed services for vulnerability management – there is plenty to talk about.
  • Patching: No, it’s not novel, but it’s still a critical part of the security/ops toolbox. As the tools continue to commoditize, we’ll look at what’s important and how patching can & should be used as a stepping stone to more sophisticated configuration management. The process (laid bare in Patch Management Quant) hasn’t changed, but we’ll have some thoughts on tool evolution for 2011.
  • Configuration Policy Compliance: Pretty much all the vulnerability management players are looking at auditing device configurations and comparing reality to policy, as a logical extension of the scans they already do. And they are right, to a point. In 2011 we’ll look at this capability as leverage on other security operational functions. We’ll also document the key capabilities required for security and operational efficiency, beyond managing configuration changes for policy compliance. (A toy sketch of this kind of check closes out this section.)

To be honest I’m not crazy about the term Posturing, but I couldn’t think of anything I liked better. This concept really plays into two aspects of our security philosophy:

  • Reduce attack surface: A configuration policy, backed by solid vulnerability/configuration/patching operations, helps close the holes used by less sophisticated attackers. Positivity falls into this bucket as well, by restricting the types of traffic and executables allowed in our environments.
  • React faster: By watching for configuration changes, which can indicate unauthorized activity on key devices (generally not good), you put yourself in position to see attacks sooner, and thus to respond faster. Yes, we are doing a lot of research into what ‘response’ means here, but Posturing can certainly be key to making sure nothing gets missed.
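Here is a minimal sketch of the configuration monitoring idea behind both bullets above: keep a hash of each device’s approved (baseline) configuration and flag any running config that no longer matches. The device names and configs are invented placeholders; a real tool pulls configs over SSH/SNMP on a schedule and feeds alerts into the monitoring process.

```python
import hashlib

def fingerprint(config_text: str) -> str:
    """Stable hash of a device configuration."""
    return hashlib.sha256(config_text.encode()).hexdigest()

# Approved baselines, captured when each device was last certified.
baselines = {
    "edge-fw-1": fingerprint(
        "permit tcp any host 10.1.1.10 eq 443\ndeny ip any any"),
    "core-sw-1": fingerprint(
        "vlan 100 name cardholder\nvlan 200 name general"),
}

def check_drift(device: str, running_config: str) -> bool:
    """Return True (and alert) if the running config drifted from baseline."""
    drifted = fingerprint(running_config) != baselines[device]
    if drifted:
        # In practice: raise an alert for investigation - an unauthorized
        # change is often the first visible sign of an attack.
        print(f"ALERT: {device} config drifted from approved baseline")
    return drifted

# Someone added a risky rule to the edge firewall:
check_drift("edge-fw-1",
            "permit tcp any host 10.1.1.10 eq 443\npermit ip any any")
```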
React Faster and Better

We beat this topic to death in 2010, so I’m not going to reiterate that research beyond pointing to the work we’ve already done:

  • Understanding and Selecting SIEM/Log Management
  • Monitoring up the Stack
  • Incident Response Fundamentals

We’re also working on the successor to Incident Response Fundamentals in our React Faster and Better series. That should be done in early January, and then we’ll focus our research in this area on implementation and success, which means a few Quick Wins series. These will probably include:

  • Quick Wins with Network Monitoring: You know how I love monitoring, and clearly understanding and factoring network traffic into security analysis can yield huge dividends. But how? And how much?
  • Quick Wins with Security Monitoring: Deploying SIEM and Log Management can be a bear, so we’ll focus on making sure you can get quick value from any investment in this area, as well as ensuring you set yourself up for a sustainable implementation. We have learned many tricks over the past few years (particularly from folks who have screwed this up), so it’s time to share.

Once all this research is published, we’ll have a pretty deep treatment of our React Faster and Better concept.


Quick Wins with DLP Webinar

Back in April I published a slightly different take on DLP: Low Hanging Fruit: Quick Wins with Data Loss Prevention. It was all about getting immediate value out of DLP while setting yourself up for a full deployment. On Wednesday at 11:30am EST I’ll be giving a free presentation on that material. If you’re interested, you can register at the Business of Information Security site.


Get over It

Over the weekend I glanced at Twitter and saw a bit of hand-wringing inspired by something going on at (I think) BayThreat in California. This is something that’s been popping up quite a bit on Twitter and in blog posts for a while now. The core of the comments centered on the problem of educating the unwashed security masses, combined with the problems induced by a compliance mentality, and the general “they don’t understand” and “security is failing” memes. (Keep in mind I’m referring to a bunch of comments over a period of time, not pointing fingers, and over-generalizing.) My response? You can probably figure it out from the title of this post. I long ago stopped worrying about the big picture. I accepted that some people understand security, some don’t, and we all suffer from déformation professionnelle (a cognitive bias: losing the broader perspective due to our occupation). In any risk management profession it’s hard to temper our daily exposure to the worst of the worst with the attitudes and actions of those with other priorities. I went through a lot of similar hand-wringing, first in my physical security days and then with my rescue work. Ask any cop or firefighter and you’ll see the same tendencies. We need to keep in mind that others won’t always share our priorities, no matter how much we explain them, and no matter how well we “speak in the language of business”. The reality is that unless someone suffers noticeable pain or massive fear, human nature will limit how they prioritize risk. And even when they do get hit, the changes in thinking fade over time. Our job is to keep slogging through: doing our best to educate, optimizing the resources at our disposal, and staying prepared to sweep in when something bad happens and clean up the mess. Which we will then probably be blamed for. Thankless? Only if you want to look at it that way. Does it mean we should give up? No, but don’t expect human nature to change. If you can’t accept this, all you will do is burn yourself out until you end up as an alcoholic passed out behind a dumpster, naked, with your keys up your a**. Fight the good fight. But only if you can still sleep well at night.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
  • Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.