
Security Management 2.5: Evaluating the Incumbent

To explain the importance of picking a platform rather than a product, our last post compared Log Management to SIEM – like the difference between using kitchen appliances and running a machine shop. One is easy to use but limited in applicability; the other requires more work on your part but can accomplish much more. Our goal was to contrast use cases and levels of expectations between the two product classes; despite lower overall platform satisfaction and the greater amount of work required, SIEM is what many customers need to get their work done. Pushing the boundaries of what is possible involves some pain. Customers grumble about the tremendous growth in event collection driven by all these new devices, but they need to collect nearly every event type, and often believe they need real-time response. The product had better be fast and provide detailed forensic audits. Customers depend on compliance reports for their non-technical audience, along with detailed operational reports for IT. SIEM customers fight a daily yin/yang battle – between automation and generic results, between efficiency and speed, between easy and useful. Again, dissatisfaction is to be expected, but the point of this exercise is to get real work done with a balky product.

SIEMulation

To illustrate why customers go through the re-evaluation process, here are some excerpts from customer conversations:

“We had some data leakage a couple years ago; nothing serious, but it was a partner who discovered the issue. It took some time to determine why we did not see the activity with SIEM and other internal security systems. Needless to say our executive team was not happy, and wanted us to justify our security expenditures. Actually they said ‘Why did we not see this? What the hell are we paying for?’ Our goal is to be able to get detection working the way we need it to work, and that means full packet capture and analysis. As you know, that means a lot more data, and we need longer retention periods as well.”

“We upgraded from log management to SIEM two years ago in order to help with malware detection and to scale up general security awareness. The new platform is supposed to scale, but we don’t actually know if it does scale yet because we are still rolling it out. Talk to me again in a couple years – I ought to have it done by then.”

“I want security analytics. I have systems to measure supply chain efficiency. I have business risk analysis systems. I want the same view into operational and security risk, but I can’t blend the analysis capabilities from these other platforms with the SIEM data. Our goal is to have the same type of analysis everywhere, and eventually a more unified system.”

When it comes to evaluating your current platform, you need to think about the issue from two perspectives. First, formally evaluate how well your platform addresses your current and foreseeable requirements, to quantify the critical features you depend on and identify significant deficiencies. Second, look at evolving use cases and the impact of newer platforms on operations and deployment – both good and bad. Just because another vendor offers more features and performance does not mean you should replace your SIEM. The grass is not always greener on the other side. The former is critical for the decision process later in this series; the latter is essential for analyzing the ramifications of a replacement decision.
Sizing up the incumbent

The first step in the evaluation process uses the catalog of requirements you built already to critically assess how well the current SIEM platform meets your needs. This means spelling out each business function, how critical it is, and whether the current platform gets it done. You will need to discuss these questions with stakeholders from operations, security, compliance, and any other organizations that participate in the management of SIEM or take advantage of it. You cannot make this decision in a vacuum, and lining up support early in the process will pay dividends later on. Trust us on this.

Operations will be the best judge of whether the platform is easy to maintain and the complexity of implementing new policies. Security will have the best understanding of the product or service’s forensic auditing capabilities. Compliance teams can judge the suitability of reports for audit preparation. And an increasingly common contributor is risk and/or data analysts who mine information and help prioritize allocation of resources. Each audience provides a unique perspective on the criticality of some function and the effectiveness of the current platform.

At this point you have already examined your requirements, so you should understand what you have, what you want, and the difference between the two. In some cases you will find that the incumbent platform simply does not fill a hard requirement – which makes the analysis easy. In other cases the system works perfectly, but is a nightmare in terms of maintenance and care & feeding for any system or rule changes. Performance may be less than ideal, but it’s not necessarily clear what that really means, because any system could always be faster when investigating a possible breach. It may turn out the SIEM functions as designed but lacks the capacity to keep up with all the events you need to collect, or takes too long to generate actionable reports. Act like a detective, collecting these tidbits of information, no matter how small, to build the story of the existing SIEM platform in your environment.

This information will come into play later when you weigh options, and we recommend using a format that makes it easy to compare and contrast issues. Security, compliance, management, integration, reporting, analysis, performance, scalability, correlation, and forensic analysis are all areas you need to evaluate in terms of your revised requirements. Prioritization of existing and desired features helps streamline the analysis. We reiterate the importance of staying focused on critical items to avoid “shiny object syndrome” driving you to select the pretty new toy.
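To keep those stakeholder assessments comparable, it helps to capture them in a consistent structure. Below is a minimal sketch in Python of one way to do it – a weighted scorecard with hypothetical requirement names, weights, and scores, not a prescribed methodology.

```python
# Hypothetical requirements catalog: weight reflects criticality (1-5),
# score reflects how well the incumbent meets the need (0-5), per stakeholder input.
requirements = [
    {"name": "Compliance reporting",   "weight": 5, "score": 4},
    {"name": "Real-time correlation",  "weight": 4, "score": 2},
    {"name": "Forensic search speed",  "weight": 4, "score": 1},
    {"name": "Ease of rule changes",   "weight": 3, "score": 2},
    {"name": "Event collection scale", "weight": 5, "score": 3},
]

def weighted_fit(reqs):
    """Return overall fit (0-1) plus the biggest weighted gaps."""
    max_total = sum(r["weight"] * 5 for r in reqs)
    total = sum(r["weight"] * r["score"] for r in reqs)
    gaps = sorted(reqs, key=lambda r: r["weight"] * (5 - r["score"]), reverse=True)
    return total / max_total, gaps[:3]

fit, top_gaps = weighted_fit(requirements)
print(f"Incumbent fit: {fit:.0%}")
for r in top_gaps:
    print(f"Gap: {r['name']} (weight {r['weight']}, score {r['score']})")
```

The same sheet, filled in per stakeholder, also makes it obvious where operations, security, and compliance disagree about what matters.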


Security Management 2.5: Revisiting Requirements

Given the evolution of SIEM technology and the security challenges facing organizations, it is time to revisit requirements and use cases. This is an essential part of the evaluation process. You need a fresh and critical look at your security management environment to understand what you need today, how that will change tomorrow, and what kinds of resources and expertise you can harness – unconstrained by your current state. While some requirements may not have changed all that much (such as ease of management and compliance reporting), as we described earlier in this series, the way we use these systems has changed dramatically. That is our way of saying it is good to start with a laundry list of things you would like to be able to do, but cannot with your current system. And while you are thinking about shiny new capabilities, don’t forget the basic day-to-day operational stuff a SIEM does. Finally, you need to consider what is coming down the road in terms of business challenges (and the resulting security issues) that will emerge over the next couple years. None of us has a crystal ball, but critical business imperatives should give you a foundation for figuring out how things need to change.

A fresh start

Some organizations choose to take a fresh look at their security management infrastructure every so often, while others have the choice thrust upon them. For instance, if your organization was breached or infected by malware and your SIEM platform failed to detect it, you need to take a critical look at your security management environment. The current platform may be adequate, or it might be a dog – and you personally might not have even chosen it – but keep in mind that your success is linked to how well your platform meets your requirements. If things go south, blaming your predecessor for choosing a mediocre SIEM won’t save your job. You also need to face the reality that other groups within the organization have differing needs for the SIEM. Operations only cares that they get the metrics they need, compliance teams only care about getting their reports, and upper management only cares about pushing blame downhill when the company is pwned by hackers.

It’s time to roll up your sleeves and figure out what you need. Every so often it makes sense to look critically at what works and what doesn’t from the standpoint of security management. To find the best path forward, we recommend starting with the proverbial blank slate. It is helpful to consider your priorities when you selected the system in the first place, to illuminate how your environment has changed over time and help understand the amount of change to expect in the future. To be more specific, use this opportunity to revisit the priorities of your requirements and use cases for each of the three main drivers for security management spending: improving security, increasing efficiency, and automating compliance.

It’s all about me

Setting requirements is all about you, right? It’s about what you need to get done. It’s about how you want to work. It’s about what you can’t do – at least easily – today. Well, not quite. We jest to make a point: you need to start with a look inward at what your company needs – rather than getting distracted by what the market is offering today. This requires taking a look at your organization, and the other internal teams that use the SIEM. Once your team is clear about your own requirements, start to discuss requirements with external influencers.
Assuming you work in security, you should consult ops teams, business users, compliance, and perhaps the general counsel about their various requirements. This should confirm the priorities you established earlier, and set the stage for enlisting support if you decide to move to a new platform. Our research has shown that organizational needs remain constant, as mentioned above: improve security, improve efficiency, and support compliance. But none of these goals has gotten easier. The scale of the problem has grown, so if you have stood still and have not added new capabilities… you have actually lost ground.

For example, perhaps you first implemented a Log Management capability to crank out compliance reports. We see that as a common initial driver. But as your organization grew and you did more stuff online, you collected more events, and now need a much larger platform to aggregate and analyze all that data. Or perhaps you just finished cleaning up a messy security incident which your existing SIEM missed. If so you probably now want to make sure correlation and monitoring work better, and that you have some kind of threat intelligence to help you know what to look for.

Increasingly SIEM platforms monitor up the stack – collecting additional data types including identity, Database Activity Monitoring, application support, and configuration management. That additional data helps isolate infrastructure attacks, but you cannot afford to stop there. As attacks target higher-level business processes and the systems that automate them, you will need visibility beyond core infrastructure. So your security management platform needs to detect attacks in the context of business threats. Don’t forget about advanced forensics – it would be folly to count on blocking every attack. So you will probably rely on your security management platform to help React Faster and Better with incident response.

You might also be looking for a more integrated user experience across a number of security functions to improve efficiency. For example you might have separate vendors for change detection, vulnerability management, firewall and IDS monitoring, and Database Activity Monitoring. You may be wearing out your swivel chair switching between all those consoles, and simplification through vendor consolidation might be a key driver as you revisit your requirements. Don’t be hung up on what you have – figure out what you need now. Do a little thinking about what would make your life a lot easier, and use those ideas as input to your revised requirements.


Security Management 2.5: Platform Evolution

This post discusses evolutionary changes in SIEM, focusing on how underlying platform capabilities have evolved to meet the requirements discussed in the last post. To give you a sneak peek, it is all about doing more with more data. The change we have seen in these platforms over the past few years has been mostly under the covers. It’s not sexy, but this architectural evolution was necessary to make sure the platforms scaled and could perform the needed analysis moving forward. The problem is that most folks cannot appreciate the boatload of R&D which has been required to enable many platforms to receive a proverbial brain transplant. We will start with the major advancements.

Architectural Evolution

To be honest, we downplayed the importance of SIEM’s under-the-hood changes in our previous paper. The “brain transplant” was the significant change that enabled a select few vendors to address the performance and scalability issues plaguing the first generation of platforms, which were built on relational databases (RDBMS). For simplicity’s sake we skipped over the technical details of how and why. Now it’s time to explore that evolution. The fundamental change is that SIEM platforms are no longer built around a single massive centralized service. By leveraging a distributed approach – a cooperative cluster of many servers independently collecting, digesting, and processing events – policies are distributed across multiple systems to handle the load more effectively and efficiently. If you need to support more locations or pump in a bunch more data, just add nodes to the cluster. If this sounds like big data, that’s because it essentially is: several platforms leverage big data technologies under the hood. The net result is parallel event processing resources deployed ‘closer’ to event sources, faster event collection, and systems designed to scale without massive reconfiguration. This architecture enables different deployment models; it also better accommodates distributed IT systems, cloud providers, and virtual environments – which increasingly constitute the fabric of modern technology infrastructure. The secret sauce making it all possible is distributed system management. It is easy to say “big data”, but much harder to do heavy-duty security analysis at scale. Later, when we discuss proof-of-concept testing and final decision-making, we will explore how to substantiate these claims. The important parts, though, are the architectural changes to enable scaling and performance, and support for more data sources. Without this shift nothing else matters.

Serving Multiple Use Cases

The future of security management is not just about detecting advanced threats and malware, although that is the highest-profile use case. We still need to get work done today, which means adding value to the operations team, as well as to compliance and security functions. This typically involves analyzing vulnerability assessment information so security teams can ensure basic security measures are in place. You can analyze patch and configuration data similarly to help operations teams keep pace within dynamic – and increasingly virtual – environments. We have even seen cases where operations teams detected application DoS attacks through infrastructure event data. This kind of derivative security analysis is the precursor to allowing risk and business analytics teams to make better business decisions – to redeploy resources, take applications offline, etc. – by leveraging data collected by the SIEM.
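To make the scale-out idea concrete, here is a minimal Python sketch of spreading events across a cooperative cluster. The node names, event fields, and hashing scheme are hypothetical – real products use their own partitioning and management layers – but it illustrates why adding a node is the unit of scaling.

```python
import hashlib
from collections import defaultdict

NODES = ["collector-1", "collector-2", "collector-3"]  # hypothetical cluster members

def assign_node(event_source: str) -> str:
    """Pick a cluster node for an event source by hashing, so load spreads
    deterministically and new sources don't require central reconfiguration."""
    digest = int(hashlib.sha256(event_source.encode()).hexdigest(), 16)
    return NODES[digest % len(NODES)]

def process_locally(node: str, event: dict) -> None:
    # Each node would parse, index, and run correlation rules near the source,
    # forwarding only alerts and summaries to the central tier.
    print(f"{node} processing {event['source']}: {event['msg']}")

events = [
    {"source": "fw-eu-01",   "msg": "deny tcp 10.1.2.3 -> 10.9.8.7:445"},
    {"source": "web-us-04",  "msg": "POST /login 401"},
    {"source": "dc-ldap-02", "msg": "bind failure for svc_backup"},
]

buckets = defaultdict(list)
for ev in events:
    buckets[assign_node(ev["source"])].append(ev)

for node, node_events in buckets.items():
    for ev in node_events:
        process_locally(node, ev)
```

Supporting a new office or data source means little more than adding a node and letting the assignment spread the load, rather than re-sizing a central server.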
Enhanced Visibility

Attackers continually shift their strategies to evade detection, increase efficiency, and maximize the impact of their attacks. Historically one of SIEM’s core value propositions has been an end-to-end view, enabled by collecting all sorts of different log files from devices all around the enterprise. Unfortunately that turned out not to be enough – log files and NetFlow records rarely contain enough information to detect or fully investigate an attack. We need better visibility into what is actually happening within the environment, rather than expecting analysts to wade through zillions of event records to figure out when you are under attack. We have seen three technical advances which, taken together, provide much better visibility into the event stream. In no particular order they are more (and better) data, better analysis techniques, and better visualization capabilities.

More and Better Data: Collect application events, full packet capture – not just metadata – and other sources that taxed older SIEM systems. In many cases the volume or format of the data was incompatible with the underlying data management engine.

Better Analysis: These new data sources enable more detailed analysis, longer retention, and broader coverage; together those capabilities provide better depth and context for our analyses.

Better Visualization: Enhanced analysis, combined with advanced programmatic interfaces and better visualization tools, substantially improves the experience of interrogating the SIEM. Old-style dashboards, with simplistic pie charts and bar graphs, have given way to complex data representations that much better illuminate trends and highlight anomalous activity.

These might look like simple incremental improvements to existing capabilities, but combined they enable a major improvement in visibility.

Decreased Time to Value

The most common need voiced by SIEM buyers is for their platforms to provide value without requiring major customization and professional services. Customers are tired of buying SIEM toolkits, and then needing to spend time and money to build a custom SIEM system tailored to their particular environment. As we mentioned in our previous post, collecting an order of magnitude more data requires a similar jump in analysis capabilities – the alternative is to be drowned in a sea of alerts. The same math applies to deployment and management – monitoring many more types of devices and analyzing data in new ways means platforms need to be easier to deploy and manage simply to maintain the old level of manageability. The good news is that SIEM platform vendors have made significant investments to support more devices and offer better installation and integration, which combined make deployment less resource intensive. As these platforms integrate the new data sources and enhanced visibility described above, the competitiveness of a platform can be determined by the simplicity and intuitiveness of its management interface, and the availability of out-of-the-box policies and reports.
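As a small illustration of what “more and better data” demands from the platform, here is a minimal Python sketch that normalizes one raw syslog line into a structured event record. The regex, field names, and schema are hypothetical; commercial SIEMs ship hundreds of parsers and far richer field taxonomies.

```python
import re

RAW = '2013-12-20T10:02:33Z sshd[2412]: Failed password for admin from 203.0.113.50 port 52113 ssh2'

# Hypothetical parser for one event type; every new data source needs its own.
PATTERN = re.compile(
    r'(?P<ts>\S+)\s+(?P<proc>\w+)\[(?P<pid>\d+)\]:\s+'
    r'Failed password for (?P<user>\S+) from (?P<src_ip>\S+) port (?P<src_port>\d+)'
)

def normalize(raw_line: str) -> dict:
    """Turn one raw log line into a normalized event record that the
    analytics tier can correlate with other data sources."""
    m = PATTERN.search(raw_line)
    if not m:
        return {"event_type": "unparsed", "raw": raw_line}
    return {
        "timestamp": m.group("ts"),
        "event_type": "auth_failure",
        "process": m.group("proc"),
        "user": m.group("user"),
        "src_ip": m.group("src_ip"),
        "src_port": int(m.group("src_port")),
        "raw": raw_line,
    }

print(normalize(RAW))
```

The interesting work is not the parsing itself but keeping the normalized fields consistent across thousands of device types so correlation and visualization actually line up.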


Security Management 2.5: Changing Needs

Today’s post discusses the changing needs and requirements organizations have for security management, which is just a fancy way of saying “Here’s why customers are unhappy.” The following items are the main discussion points when we speak with end users, and the big-picture reasons motivating SIEM users to consider alternatives.

The Changing Needs of Security Management

Malware/Threat Detection: Malware is by far the biggest security issue enterprises face today. It is driving many of the changes rippling through the security industry, including SIEM and security analytics. SIEM is designed to detect security events, but malware is designed to be stealthy and evade detection. You may be looking for malware, but you don’t always know what it looks like. Basically you are hunting for anomalies that kinda-sorta could be an attack, or just odd stuff that may look like an infection. The days of simple file-based detection are gone, or at least anything resembling the simple malware signature of a few years ago. You need to detect new and novel forms of advanced malware, which requires adding different data sources to the analysis and observing patterns across different event types. We also need to leverage emerging security analytics capabilities to examine data in new and novel ways. Even if we do all this, it might still not be enough. This is why feeding third-party threat intelligence into the SIEM is becoming increasingly common – allowing organizations to look for attacks already happening to others.

Cloud & Mobile: As firms move critical data into cloud environments and offer mobile applications to employees and customers, the definition of ‘system’ now encompasses use cases outside the classical corporate perimeter, changing the scope of the infrastructure you need to monitor. Compounding the issue is the difficulty of monitoring mobile devices – many of which you do not fully control – made harder by the lack of effective tools to gather telemetry and metrics from those devices. Even more daunting is the lack of visibility (basically log and event data) into what’s happening within your cloud service providers. Some cloud providers cannot provide infrastructure logs to customers because their event streams combine events from all their customers. In some cases they cannot provide logs because there is simply no ‘network’ to tap – it’s virtual, and existing data collectors are useless. In other cases the cloud provider is simply not willing to share the full picture of events, and you may be prohibited contractually from capturing events. The net result is that you need to tackle security monitoring and event analysis in a fundamentally different fashion. This typically involves collecting the events you can gather (application, server, identity, and access logs) and massaging them into your SIEM. For Infrastructure as a Service (IaaS) environments, you should look at adding your own cloud-friendly collectors in the flow of application traffic.

General Analytics: If you collect a mountain of data from all IT systems, much more information is available than just security events. This is a net positive, but it cuts both ways – some event analysis platforms are set up for IT operations first, and both security and business operations teams piggyback off that investment. In this case analysis, reporting, and visualization tools must be not just accessible to a wider audience (like security), but also optimized to do true correlation and analysis.
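To illustrate the threat intelligence point above, here is a minimal Python sketch of matching collected events against a third-party indicator list. The IP addresses and event fields are hypothetical, and real integrations consume structured feeds (CSV, STIX, vendor APIs) rather than a hard-coded set.

```python
# Hypothetical indicator feed: in practice these arrive from commercial or
# open source threat intelligence providers and are refreshed continuously.
MALICIOUS_IPS = {"198.51.100.23", "203.0.113.50"}

events = [
    {"src_ip": "10.1.2.3",     "dst_ip": "198.51.100.23", "action": "allowed"},
    {"src_ip": "192.168.4.15", "dst_ip": "93.184.216.34", "action": "allowed"},
    {"src_ip": "203.0.113.50", "dst_ip": "10.1.2.9",      "action": "denied"},
]

def match_indicators(event_stream, indicators):
    """Flag any event whose source or destination appears on the indicator list."""
    for ev in event_stream:
        hits = {ev["src_ip"], ev["dst_ip"]} & indicators
        if hits:
            yield {"event": ev, "indicators": sorted(hits)}

for alert in match_indicators(events, MALICIOUS_IPS):
    print("Threat intel hit:", alert)
```

The value comes less from the lookup itself than from someone else having already paid the price to learn that those addresses are bad.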
What Customers Really Want

These examples are what we normally call “use cases”, which reflect the business drivers creating the motivation to take action. These situations are significant enough (and customers unhappy enough) to consider jettisoning current solutions and going through the pain of re-evaluating requirements and current deficiencies. Those bullet points represent the high-level motivations, but they do not tell the whole story. They are the business reasons firms are looking, but they do not capture why many of the current platforms fail to meet expectations. For that we need to take a slightly more technical look at the requirements.

Deeper Analysis Requires More Data

To address the use cases described above, especially malware analysis, more data is required. This necessarily means more event volume – such as capturing and storing full packet streams, even if only for a short period. It also means more types of data – such as human-readable data mixed in with machine logs and telemetry from networks and other devices. It includes gathering and storing complex data types, such as binary or image files, which are not easy to parse, store, or even categorize.

The Need for Information Requires Better and More Flexible Analysis

Simple correlation of events – who, what, and when – is insufficient for the kind of security analysis required today. This is not only because those attributes are insufficient to distinguish bad from good, but also because data analysis approaches are fundamentally evolving. Most customers we speak with want to profile normal traffic and usage; this profile helps them understand how systems are being used, and also helps detect anomalies likely to indicate misuse. There is some fragmentation in how customers use analysis – some choose to leverage SIEM for more real-time alerting and analysis; others want big-picture visibility, created by combining many different views for an overall sense of activity. Some customers want fully automated threat detection, while others want more interactive ad hoc and forensic analysis. To make things even harder for the vendors, today’s hot analysis methods could very well be irrelevant a year or two down the road. Many customers want to make sure they can update analytics as requirements develop – optimized hard-wired analytics are now a liability rather than an advantage for these products.

The Velocity of Attacks Requires Threat Intelligence

When we talk about threat intelligence we do not limit the discussion to things like IP reputation or ‘fingerprint’ hashes of malware binaries – those features are certainly in wide use, but the field of threat intelligence includes far more. Some threat intelligence feeds look at
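The profiling idea above lends itself to a simple illustration. Below is a minimal Python sketch of flagging an observation that deviates from a historical baseline; the counts, the scenario, and the three-sigma threshold are hypothetical stand-ins for the far richer behavioral models commercial analytics use.

```python
from statistics import mean, pstdev

# Hypothetical daily login-failure counts for one account over the past two weeks.
baseline_counts = [2, 1, 0, 3, 2, 1, 2, 0, 1, 2, 3, 1, 2, 1]
observed_today = 14

def is_anomalous(history, observed, threshold=3.0):
    """Flag the observation if it sits more than `threshold` standard deviations
    above the historical mean -- a crude stand-in for behavioral profiling."""
    mu, sigma = mean(history), pstdev(history)
    if sigma == 0:
        return observed > mu
    return (observed - mu) / sigma > threshold

if is_anomalous(baseline_counts, observed_today):
    print(f"Anomaly: {observed_today} failures vs. baseline mean {mean(baseline_counts):.1f}")
```

The hard part in practice is not the arithmetic but deciding which behaviors to baseline and keeping false positives at a level analysts can actually live with.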


Security Management 2.5: Replacing Your SIEM Yet? [New Series]

Security Information and Event Management (SIEM) systems create a lot of controversy among security folks. They are one of the cornerstones on which the security program is built in every enterprise, yet SIEM simultaneously generates the most complaints and general angst. Two years ago Mike and I completed a research project, “SIEM 2.0: Time to Replace your SIEM?”, based on a series of conversations with organizations who wanted more from their investment. Specifically they wanted more scalability, easier deployment, the ability to ‘monitor up the stack’ in the context of business applications, and better integration with enterprise systems (like identity).

Over the past two years the pace of customer demands – and of platform evolution to meet those demands – has accelerated. What we thought was the tail end of a trend, with second-generation SIEMs improving scalability using purpose-built data stores, turned out to be the tip of the iceberg. Enterprises wanted to analyze more types of data, from more sources, with more (read: better) analysis capabilities, to derive better information and keep pace with advanced attackers. Despite solid platform upgrades from a number of SIEM vendors, these requirements have blossomed faster than vendors could respond. And sadly, some security vendors marketed “advanced capabilities” that were really the same old pig in a new suit, causing further chagrin and disappointment among their customers. Whatever the reason, here we are two years later, listening to the same tale from customers looking to replace their SIEM (again) in light of these new requirements.

You may feel like Bill Murray in Groundhog Day, reliving the past over and over again, but this time is different. The requirements have changed! Actually they have. The original architects of the early SIEM platforms could not have envisioned the kind of analysis required to detect attacks designed to evade SIEM tools. The attackers are thinking differently, and that means defenders who want to keep pace need to rip up their old playbook – and very likely critically evaluate their old tools as well. Malware is now the major driver, but since you can’t really detect advanced attacks based on a file signature anymore, you have to mine data for security information in a whole new way. Cloud computing and mobile devices are disrupting the technology infrastructure. And the collection and analysis of these and many other data streams (like network packet capture) are bursting the seams of SIEM.

It doesn’t stop at security alerting either. Other organizations, from IT operations to risk to business analytics, also want to mine the collected security information, looking for new ways to streamline operations, maintain availability, and optimize the environment. Moving forward, you will need to heavily leverage your investments in security monitoring and analysis technologies. If that resource can’t be leveraged, enterprises will move on and find something more in line with their requirements. Given the rapid evolution we have seen in SIEM/Log Management over the past 4-5 years, product obsolescence is a genuine issue. The negative impact of a product that has not kept pace with technical evolution and customer requirements cannot be trivialized. This pain becomes more acute in the event of a missed security incident, because the SIEM did not collect the requisite information – or worse, could not detect the threat.
Customers spend significant resources (both time and money) on the care and feeding of their SIEM. If they don’t feel the value is in line with the investment, again, they will move on and search for better, easier, and faster products. It is realistic, if not expected, that these customers start questioning whether the incumbent offering makes sense for their organization moving forward. Additionally, firms are increasingly considering managed services and third-party security operations providers to address skills and resource shortages within internal groups. Firms simply don’t have the internal expertise to look for advanced threats. This skills gap also promises to reshape the landscape of security management, so we will kick off the series by discussing these factors, setting the stage to update our guide to selecting a SIEM. Specifically, we will cover the following topics:

The Changing Needs of Security Management: As firms branch into cloud environments and offer mobile applications to their employees and customers, the definition of ‘system’ now encompasses use cases outside what has long been considered the corporate perimeter, changing the view of the “infrastructure” that needs to be monitored. Simultaneously, advanced malware attacks now require more types of data, threat intelligence, and policies to detect adequately. Additionally, firms are increasingly considering managed services and third-party security operations to address skills and resource shortages within internal groups. All of these factors are once again reshaping the landscape of security management, so we will kick off the series discussing them to set the stage for re-evaluating the security management platform.

Evolution of SIEM Platform (and Technology): Next we will discuss the evolutionary changes in SIEM from the standpoint of platform capabilities. It is still all about doing more with more data. We will cover architectural evolution, integration, and the ongoing care and feeding of the environment to meet scaling requirements. We will also discuss how SIEM increasingly leverages other data sources – such as virtual servers, mobile events, big data analytics, threat feeds, and both human- and machine-generated data. But all of this data does nothing if you don’t have the capabilities to do something with it, so we will discuss new analysis techniques, and updates to older approaches, that yield better results faster. Doing more with more means that, under the covers, scale and performance are achieved by virtualizing lower-cost commodity hardware and leveraging new data storage and data management architectures. SIEM remains the aggregation point for operations and security data, but the demands on the platform to ‘do more with more data’ are pushing the technical definition of SIEM forward and spawning the hybrid models necessary to meet the requirements.

Revisiting Your Requirements: Given the evolution of both the technology and the attacks, it’s time to revisit your specific requirements and use cases.


Friday Summary: December 20, 2013 (Year-End Edition)

I have not done a Friday Summary in a couple weeks – a post we have rarely missed over the last 6 years – so bad on me for being a flake. Sorry about that, but that does not mean I don’t have a few things I want to talk about before year’s end.

Noise. Lots of Bitcoin noise in the press, but little substance. Blogs like Forbes are speculating on Bitcoin investment potential, currency manipulation, and hoarding, tying in a celebrity whenever possible. Governments around the globe leverage the Gattaca extension of Godwin’s Law when they say “YOU ARE EITHER WITH US OR IN FAVOR OF ILLEGAL DRUGS AND CHILD PORNOGRAPHY” – basing their arguments on unreasoning fear. This was the card played by the FBI and DHS this week, when they painted Bitcoin as a haven for money-launderers and child pornographers. But new and disruptive technologies always cause problems – in this case it is fundamentally disruptive for governments and fiat currencies. Governments want to tax it, track it, control exchange rates, and do lots of other stuff in their own interest. And unless they can do that they will label it evil. But lost in the noise are the simple questions like “What is Bitcoin?” and “How does it work?” These are very important, and Bitcoin is the first virtual currency with a real shot at being a legitimate platform, so I want to delve into them today.

Bitcoin is a virtual currency system, as you probably already knew. The key challenges of digital currency systems are not assigning uniqueness in the digital domain – where we can create an infinite number of digital copies – nor assignment of ownership of digital property, but stopping fraud and counterfeiting. This is conceptually no different than traditional currency systems, but the implementation is of course totally different. When I started writing this post a couple weeks ago, I ran across a blog post from Michael Nielsen that explains how the Bitcoin system works better than my own draft did, so I will just point you there. Michael covers the basic components of any digital currency system, which are simple applications of public-key cryptography and digital signatures/hashes, along with the validation processes that deter fraud and keep the system working. Don’t be scared off by the word ‘cryptography’ – Michael uses understandable prose – so grab yourself a cup of coffee and give yourself a half hour to run through it. It’s worth your time to understand how the system is set up, because you may be using it – or a variant of it – at some point in the future.

But ultimately what I find most unique about Bitcoin is that the community validates transactions, unlike most other systems which use a central bank or designated escrow authorities to approve money transfers. This avoids a single government or entity taking control. And personally, having built a system for virtual currency way back when, before the market was ready for such things, I always root for projects like Bitcoin. Independent and anonymous currency systems are a wonderful thing for the average person; in this day and age, where we use virtual environments – think video games and social media – virtual currency systems provide application developers an easy abstraction for money. And that’s a big deal when you’re not ready to tackle money or exchanges or ownership when building an application. When you build a virtual system it should be the game or the social interaction that counts.
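For a feel of the mechanics Nielsen walks through, here is a toy Python sketch of the hash-chaining and proof-of-work ideas. It is nothing like Bitcoin’s real data structures or difficulty levels – the block fields and difficulty value are made up – but it shows why rewriting an already-validated transaction would mean redoing work the community has already done.

```python
import hashlib
import json

def mine(block: dict, difficulty: int = 4) -> dict:
    """Search for a nonce whose block hash starts with `difficulty` zero hex digits.
    Community validation in miniature: altering any recorded transaction changes
    the hash, so all of this work (and every later block's) must be redone."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        block["nonce"] = nonce
        digest = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
        if digest.startswith(prefix):
            block["hash"] = digest
            return block
        nonce += 1

block = {
    "prev_hash": "00a3f1",  # hash of the previous block chains them together
    "transactions": ["alice -> bob: 0.5", "carol -> dave: 1.2"],
}
print(mine(block))
```

The same trick, at vastly higher difficulty and spread across thousands of independent nodes, is what lets the network agree on a transaction history without a central bank.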
Being able to buy and trade in the context of an application, without having a Visa logo in your face or dealing with someone trying to regulate – or even tax – the hours spent playing, is a genuine consumer benefit. And it allows any arbitrary currency to be created, which can be tuned to the digital experience you are trying to create. More reading if you are interested: Bitcoin, not NFC, is the future of payments, and Mastercoin (Thanks Roy!).

Ironically, this Tuesday I wrote an Incite on the idiocy of PoS security and the lack of Point-to-Point encryption, just before the breach at Target stores which Brian Krebs blogged about. If merchants don’t use P2P encryption, from card swipe to payment clearing, they must rely on ‘endpoint’ security of the Point of Sale terminals. Actually, in a woulda/coulda/shoulda sense, there are many strategies Target could have adopted. For the sake of argument let’s assume a merchant wants to secure their existing PoS and card swipe systems – which is a bit harder than securing desktop computers in an enterprise, and that is already a losing battle. The good news is that both the merchant and the card brands know exactly which cards have been used – meaning they know the scope of their risk and can ratchet up fraud analytics on these specific cards. Or even better, cancel and reissue. But that’s where the bad news comes in: no way will the card brands cancel credit cards during the holiday season – it would be a PR nightmare if holiday shoppers couldn’t buy stuff for friends and families. Besides, the card brands don’t want pissed-off customers because a merchant got hacked – this should be the merchant’s problem, not theirs. I think this is David Rice’s point in Geekonomics: people won’t act against their own short-term interests, even if that hurts them in the long run. Of course the attackers know this, which is exactly why they strike during the holiday season: the many transactions that don’t fit normal card usage profiles make fraud harder to detect, and their stolen cards are less likely to be canceled en masse. Consumers get collateral poop-spray from the hacked merchant, so it’s prudent for you to look for and dispute any charges you did not make. And, since the card brands have tried to tie debit and credit cards together, there are
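To make the point-to-point encryption argument concrete, here is a minimal Python sketch using the `cryptography` package’s Fernet recipe. It is purely conceptual – real P2PE uses keys derived inside tamper-resistant card readers and an HSM at the processor, not a shared key in application code – but it shows why a compromised PoS register would only ever see ciphertext.

```python
# pip install cryptography
from cryptography.fernet import Fernet

# In a real P2PE deployment the key material lives in the card reader's secure
# hardware and the payment processor's HSM -- never on the PoS register itself.
shared_key = Fernet.generate_key()
swipe_side = Fernet(shared_key)
processor_side = Fernet(shared_key)

def capture_swipe(pan: str, expiry: str) -> bytes:
    """Encrypt track data at the moment of swipe, before it touches the PoS OS."""
    return swipe_side.encrypt(f"{pan}|{expiry}".encode())

ciphertext = capture_swipe("4111111111111111", "12/15")
print("PoS register only ever sees:", ciphertext[:32], b"...")

# Only the clearing side can recover the card number.
print("Processor recovers:", processor_side.decrypt(ciphertext).decode())
```

Malware scraping the register’s memory gets nothing usable, which is exactly the property the Target-style attacks exploit the absence of.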


Datacard Acquires Entrust

Datacard Group, a firm that produces smart card printers and associated products, has announced its acquisition of Entrust. For those of you who are not familiar with Entrust, they were front and center in the PKI movement of the 1990s. Back then the idea was to issue a public/private key pair to uniquely identify every person and device in the universe. Ultimately that failed to scale and became unmanageable, with many firms complaining “I just spent millions of dollars so I can send encrypted email to the guy sitting next to me.” So for you old-time security people out there saying to yourself “Hey, wait, isn’t PKI dead?”, the answer is “Yeah, kinda.” Still others are saying “I thought Entrust was already acquired?”, to which the answer is “Yes” – by investment firm/holding company Thoma Bravo in 2009. Entrust, like all the other surviving PKI vendors, has taken its core technologies and fashioned them into other security products and services. In fact, if you believe the financial numbers in the press releases under Thoma Bravo, Entrust has been steadily growing. Still, for most of you, a smart card hardware vendor buying a PKI vendor makes no sense. But in terms of where the smart card market is heading, in response to disruptive mobile and cloud computing technologies, the acquisition makes sense. Here are some major points to consider:

What does this mean for Datacard?

One Stop Shop: The smart card market is an interesting case of ‘coopetition’, as each major vendor in the field ends up partnering on some customer deals, then competing head to head on others. “Cobbling together solutions” probably sounds overly critical, but the fact is that most card solutions are pieced together from different providers’ hardware, software, and services. Customer requirements for specific processes, card customization, adjudication, and particular regions tend to force smart card producers to partner in order to fill the gaps. By pulling in a couple key pieces from Entrust – specifically around certificate production, cloud, and PKI services – DCG comes very close to an end-to-end solution. When I read the press release from Datacard this morning, they used an almost meaningless marketing phrase: “reduce complexity while strengthening trust.” I think they mean that a single vendor means fewer moving parts and fewer providers to worry about. That’s possible, provided Datacard can stitch these pieces together so the customer (or service provider) does not need to.

EMV Hedge: If you read this blog on a regular basis, you will have noticed that every month I say EMV is not happening in the US – at least not the way the card brands envision it. While I hate to bet against Visa’s ability to force change in the payment space, consumers really don’t see the wisdom in carrying around more credit cards for shopping from their computer or mobile device. Those of you who no longer print out airline boarding passes understand the appeal of carrying one object for all these simple day-to-day tasks. Entrust’s infrastructure for mobile certificates gives Datacard the potential to offer either a physical card or a mobile platform solution for identity and payment. Should the market shift away from physical cards for payment or personal identification, they will be ready to react accordingly.

Dipping a Toe into the Cloud: Smart card production technology is decidedly old school.
Dropping a Windows-based PC on-site to do user registration and adjudication seems so 1999, but this remains the dominant model for drivers’ licenses, access cards, passports, national ID, and so on. Cloud services are a genuine advance, and offer many advantages for scale, data management, software management, and linking all the phases of card production together. While Entrust does not appear to be on the cutting edge of cloud services, they certainly have infrastructure and experience which Datacard lacks. From this standpoint the acquisition is a major step in the right direction, toward a managed service/cloud offering for smart card services. Honestly, I am surprised we haven’t seen more competitors do this yet, and I expect them to buy or build comparable offerings over time.

What does this mean for Entrust Customers?

Is PKI Dead or Not? We have heard infamous analyst quotes to the effect that “PKI is dead.” The problem is that PKI infrastructure is often erroneously confused with PKI technologies. Most enterprises who jumped on the PKI infrastructure bandwagon in the 1990s soon realized that the identity approach was unmanageable and unscalable. That said, the underlying technologies of public key cryptography and X.509 certificates are not just alive and well, but critical for network security. And getting this technology right is not a simple endeavor. These tools are used in every national ID, passport, and “High Assurance” identity card, so getting them right is critical. This is likely Datacard’s motivation for the acquisition, and it makes sense for them to leverage this technology across all their customer engagements, so existing Entrust PKI customers should not need to worry about product atrophy.

SSL: SSL certificates are more prevalent now than ever, because most enterprises, regardless of market, want secure network communications – or at least they are compelled by some compliance mandate to secure network communications to ensure privacy and message integrity. For web and mobile services this means buying SSL certificates, a market which has been growing steadily for the last 5 years. While Entrust is not dominant in this field, they are one of the first and more trusted providers. That does not mean this acquisition is without risks. Can Datacard run an SSL business? The SSL certificate business is fickle, and there is little friction when switching from one vendor to another. We have been hearing complaints about one of the major vendors in this field having aggressive sales tactics and poor service, resulting in several small enterprises switching certificate vendors. There are also risks for a hardware company digesting a software business, with inevitable cultural and technical issues. And there are genuine threats to any certificate authority.
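For readers who have not touched this plumbing since the 1990s, here is a minimal sketch of the key pair and X.509 certificate primitives underneath both the PKI and SSL businesses, using the Python `cryptography` package. The subject name and validity period are made up, and a real CA such as Entrust would sign a certificate request rather than self-signing.

```python
# pip install cryptography
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

# Generate a key pair -- the same primitive behind smart card credentials and SSL.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Self-signed here for brevity; in practice a CA signs a CSR and vouches for the subject.
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "device-0042.example.com")])
cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.datetime.utcnow())
    .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=365))
    .sign(key, hashes.SHA256())
)

print(cert.public_bytes(serialization.Encoding.PEM).decode()[:120], "...")
```

The technology is straightforward; the hard (and valuable) part is the registration, adjudication, and lifecycle management wrapped around it, which is where the acquisition logic lives.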


Friday Summary: November 15, 2013

There is a lot I want to talk about this week, so I decided to resort to some three-dot blogging.

A few years ago at the security bloggers meet-up, Jeremiah Grossman, Rich Mogull, and Robert Hansen were talking about browser security. After I rudely butted into the conversation they asked me whether “the market” would be interested in a secure browser – one that was not compromised to allow marketing and advertising concerns to trump security. I felt no one would pay for it, but the security community and financial services types would certainly be interested in such a browser. So I was totally jazzed when WhiteHat finally announced Aviator a couple weeks back. And work being what it has been, I finally got a chance to download it today and use it for a few hours. So far I miss nothing from Firefox, Safari, or Chrome. It’s fast, navigation is straightforward, it easily imported all my Firefox settings, and preferences are simple – somewhat the opposite of Chrome, IMO. And I like being able to switch users as I switch between different ISPs/locations (i.e., tunnels to different cloud providers). I am not giving up my Fluid browsers dedicated to specific sites, but Fluid has been breaking for unknown reasons on some sites. But the Aviator and Little Snitch combination is pretty powerful for filtering and blocking outbound traffic. I recommend WhiteHat’s post on the key differences between Aviator and Chrome. If you are looking for a browser that does not hemorrhage personal information to any and every website, download a copy of Aviator and try it out.

* * *

I also want to comment on the MongoHQ breach a couple weeks back. Typically, it was discovered by one of their tenant clients: Buffer. Now that some of the hype has died away, a couple facets of the breach should be clarified. First, MongoHQ is a Platform-as-a-Service (PaaS) provider, running on top of Amazon AWS and specializing in in-memory Mongo databases. But it is important to note that this is a breach of a small cloud service provider, rather than a database hack, as the press has incorrectly portrayed it. Second, many people assume that access tokens are inherently secure. They are not. Certain types of identity tokens, if stolen, can be used to impersonate you. Third, the real root cause was a customer support application that provided MongoHQ personnel “an ‘impersonate’ feature that enables MongoHQ employees to access our primary web UI as if they were a logged in customer”. Yeah, that is as bad as it sounds, and not a feature you want accessible from just any external location. While the CEO stated “If access tokens were encrypted (which they are now) then this would have been avoided”, that’s just one way to prevent this issue. Amazon provides pretty good security recommendations, and this sort of attack is not possible if management applications are locked down with good security zone settings and restricted to AWS certificates for administrative access. Again, this is not a “big data hack” – it is a cloud service provider who was sloppy with their deployment.

* * *

It has been a strange year – I am normally “Totally Transparent” about what I am working on, but this year has involved several projects I can’t talk about. Now that things have cleared up, I am moving back to a normal research schedule, and I have a heck of a lot to talk about.
I expect that during the next couple weeks I will begin work on:

Risk-based Authentication: Simple questions like “who are you” and “what can you do” no longer have simple binary answers in this age of mobile computing – the answers are subjective and tinged with shades of gray. Businesses need to make access control decisions, but simple access control lists are no longer adequate – they need to consider risk and behavior when making these decisions. Gunnar and I will explore this trend, and talk about the different techniques in use and the value they can realistically provide.

Securing Big Data 2.0: The market has changed significantly over the past 14 months, since I last wrote about how to secure big data clusters, so I will refresh that research, add sections on identity management, and take a closer look at application layer security – where a number of the known threats and issues persist.

Two-factor Authentication: It is often discussed as the ultimate in security: a second authentication factor to make doubly sure you are who you claim to be. Many vendors are talking about it, both for and against, because of the hype. Our executive summary will look at usage, the threats it can help address, and integration into existing systems.

Understanding Mobile Identity Management: This will be a big one – a full-on research project in mobile identity management. We will publish a full outline in the coming weeks.

Security Analytics with Big Data: I will release a series of targeted summaries of how big data works for security analytics, and how to start a security analytics program.

If you have questions on any of these, or if there are other topics you think we should be covering, shoot us an email. On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Adrian quoted on Trustwave’s acquisition of Application Security.

Favorite Securosis Posts

Mike Rothman: How to Detect Cloudwashing by Your Vendors. – Love how Adrian and Gunnar put a pin in the marketing hyperbole around cloud now. And brace yourself – we will see a lot more over the next year.

Adrian Lane: The CISO’s Guide to Cloud: How Cloud is Different for Security. This is good old-fashioned Securosis research. Focused. A bit ahead of the curve. Pragmatic. Enjoying this series.

Other Securosis Posts

Incite 11/13/2013: Bully. New Series: What CISOs Need to Know about Cloud Computing. How to Edit Our Research on GitHub. Trustwave Acquires Application Security Inc. Security Awareness Training Evolution [New Paper]. Blowing Your


How to Detect Cloudwashing by Your Vendors

“There is nothing more deceptive than an obvious fact” – Sherlock Holmes

It’s cloud. It’s cloud-ready. It’s cloud insert-name-here. As analysts we have been running into a lot of vendors labeling traditional products as ‘cloud’. Two years ago we expected the practice to die out once customers understood cloud services. We were wrong – vendors are still doing it rather than actually building the technology. Call it cloudwashing, cloudification, or just plain BS. As an enterprise buyer, how can you tell whether the system you are thinking of purchasing is a cloud application or not? It should be easy – just look at the products branded ‘cloud’, right? But dig deeper and you see it’s not so simple. Sherlock Holmes made a science of detection, and being an enterprise buyer today can feel like being a detective in a complex investigation. Vendors have anticipated your questions and have answers ready. What to do? Start by drilling down: what is behind the labels? Is it cloud, or just good old enterprise bloatware? Or is it MSO with a thin veneer of cloud? We pause here to state that there is nothing inherently wrong with enterprise software or MSO. There is also no reason cloud is necessarily better for you. Our goal here is to orient your thinking beyond labels and give you some tips so you can be an educated consumer. We have seen a grab bag of cloudwashes. We offer the following questions to help you figure out what’s real:

• Does it run at a third-party provider? (not on-premise or ‘private’ cloud)
• Is the service self-service? (i.e., you can use it without other user interactions or without downloading – not installed ‘on the edge’ of your IT network)
• Is the service metered? If you stopped using it tomorrow, would bills stop?
• Can you buy it with a credit card? Can your brother-in-law sign up with the same service?
• Do you need to buy a software license?
• Does it have an API?
• Does it autoscale?
• Did the vendor start from scratch or rewrite its product?
• Is the product standalone (i.e., not a proxy-type interface on top of an existing stack)?
• Can you deploy without assistance, or does it require professional services to design, deploy, and operate?

The more of these questions that get a ‘No’ answer, the more likely your vendor is marketing ‘cloud’ instead of selling cloud services. Why does that matter? Because real cloud environments offer specific advantages in elasticity, flexibility, scalability, self-service, pay-as-you-go, and various other areas, which are not present in many non-cloud solutions. What cloudwashing exercises have you seen? Please share in the comments below.
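If you want to keep vendor answers honest across a bake-off, the checklist is trivial to turn into a scorecard. A minimal Python sketch follows; the answer keys and the 50% threshold are arbitrary placeholders, not a formal scoring method.

```python
# Hypothetical yes/no answers gathered from a vendor briefing;
# the keys mirror the checklist above.
answers = {
    "runs_at_third_party_provider": True,
    "self_service": False,
    "metered_billing": False,
    "credit_card_purchase": False,
    "open_signup": False,
    "no_license_required": True,
    "has_api": True,
    "autoscales": False,
    "rebuilt_for_cloud": False,
    "standalone_not_proxy": True,
    "deploys_without_professional_services": False,
}

yes_count = sum(answers.values())
print(f"{yes_count}/{len(answers)} cloud indicators present")
if yes_count < len(answers) * 0.5:
    print("Smells like cloudwashing -- dig deeper before you buy.")
```

Filling this in for each vendor under consideration makes the "cloud" label comparisons explicit instead of anecdotal.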


Trustwave Acquires Application Security Inc.

It has been a while since we had an acquisition in the database security space, but today Trustwave announced it has acquired Application Security Inc. – commonly called “AppSec” by those who know the company. About 10 years ago, while working at IPLocks, I wrote my first competitive analysis paper on our principal competitor: another little-known database security company called Application Security, Inc. Every quarter for four years, I updated those competitive analysis sheets to keep pace with AppSec’s product enhancements and competitive tactics in sales engagements. Little did I know I would continue to examine AppSec’s capabilities on a quarterly basis after joining Securosis – but rather than looking solely at competitive positioning, I have been gearing my analysis toward how features map to customer inquiries, and tracking customer experiences during proof-of-concept engagements. Of all the products I have tracked, I have been following AppSec the longest.

It feels odd to be writing this for a general audience, but this deal is pretty straightforward, and it needed to happen. Application Security was one of the first database security vendors, and while they were considered a leader in the 2004 timeframe, their products have not been competitive for several years. AppSec still has one of the best database assessment products on the market (AppDetectivePRO), and one of the better – possibly the best – database vulnerability research teams backing it. But Database Activity Monitoring (DAM) is now the key driver in that space, and AppSec’s DAM product (DbProtect) has not kept pace with customer demand in terms of performance, integration, ease of use, or out-of-the-box functionality. A “blinders on” focus can be both admirable and necessary for very small start-ups to deliver innovative technologies to markets that don’t understand their new technology or value proposition, but as markets mature vendors must respond to customers and competitors. In AppSec’s early days, very few people understood why database security was important. But while the rest of the industry matured and worked to build enterprise-worthy solutions, AppSec turned a deaf ear to criticism from would-be customers and analysts. Today the platform has reasonable quality, but is not much more than an ‘also-ran’ in a very competitive field.

That said, I think this is a very good purchase for Trustwave. It means several things for Trustwave customers:

Trustwave has filled a compliance gap in its product portfolio – specifically for PCI. Trustwave is focused on PCI-DSS, and data and database security are central to PCI compliance. Web and network security have been part of their product suite, but database security has not. Keep in mind that DAM and assessment are not specifically prescribed for PCI compliance the way WAF is; but the vast majority of customers I speak with use DAM to audit activity, discovery to show what data stores are being used, and assessment to prove that security controls are in place. Trustwave should have acquired this technology a while ago.

The acquisition fits Trustwave’s model of buying decent technology companies at low prices, then selling a subset of their technology to existing customers where they already know demand exists. That could explain why they waited so long – balancing customer requirements against their ability to negotiate a bargain price.
Trustwave knows what their customers need to pass PCI better than anyone else, so they will succeed with this technology in ways AppSec never could. This puts Trustwave on a more even footing for customers who care more about security and don’t just need to check a compliance box, and gives Trustwave a partial response to Imperva’s monitoring and WAF capabilities. I think Trustwave is correct that AppSec’s platform can help with their managed services offering – Monitoring and Assessment as a Service appeals to smaller enterprises and mid-market firms who don’t want to own or manage database security platforms.

What does this mean for AppSec customers? It is difficult to say – I have not spoken with anyone from Trustwave about this acquisition, and I am unable to judge their commitment to putting engineering effort behind the AppSec products. And I cannot tell whether they intend to keep the research team which has been keeping the assessment component current. Trustwave tweeted during the official announcement that “.@Trustwave will continue to develop and support @AppSecInc products, DbProtect and AppDetectivePRO”, but that could be limited to features compliance buyers demand, without closing the performance and data collection gaps that are problematic for DAM customers. I will blog more on this as I get more information, but expect them to provide what’s required to meet compliance and no more.

And lastly, for those keeping score at home, AppSec is the 7th Database Activity Monitoring acquisition – after Lumigent (BeyondTrust), IPLocks (Fortinet), Guardium (IBM), Secerno (Oracle), Tizor (IBM via Netezza), and Sentrigo (McAfee) – leaving Imperva and GreenSQL as the last independent DAM vendors.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments – just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless they are used to provide context or contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.