Securosis Research

Advanced Endpoint and Server Protection [New Series]

Endpoint protection has become the punching bag of security. Every successful attack seems to be blamed on a failure of endpoint protection. Not that this is totally unjustified – most solutions for endpoint protection have failed to keep pace with attackers. In our 2014 Endpoint Security Buyers Guide, we discussed many of the issues around endpoint hygiene and mobility. We also explored the human element underlying many of these attacks, and how to prepare your employees for social engineering attacks, in Security Awareness Training Evolution. But realistically, hygiene and awareness won't deter an advanced attacker for long. We frequently say advanced attackers are only as advanced as they need to be – they take the path of least resistance. But the converse is also true: when this class of adversary needs advanced techniques, they use them. Traditional malware defenses such as antivirus don't stand much chance against a zero-day attack. So our new series, Advanced Endpoint and Server Protection, will dig into protecting devices against advanced attackers. We will highlight a number of new alternatives for preventing and detecting advanced malware, and examine new techniques and tools to investigate attacks and search for indicators of compromise within your environment. But first let's provide some context for what has been happening with traditional endpoint protection, because you need to understand the current state of AV technology to appreciate how these advanced alternatives help.

AV Evolution

Signature-based AV no longer works. Everyone has known that for years. It is not just that blocking a file you know is bad isn't enough any more – there are simply too many bad files, and new ones crop up too quickly, to compare every file against a list of known-bad files. The signature-based AV algorithm still works as well as it ever did, but it is no longer even remotely adequate, nor comprehensive enough to catch the varied attacks in the wild today. So the industry adapted, broadening the suite of endpoint protection technologies to include host intrusion prevention, which blocks known-bad actions at the kernel level. The industry also started sharing information across its broad customer base to identify IP addresses known to do bad things, and files which contain embedded malware. That shared information is known as threat intelligence, and it can help you learn from attacks targeting other organizations. Endpoint security providers also keep adding modules to their increasingly broad and heavy endpoint protection suites – server host intrusion prevention, patch/configuration management, and even full application whitelisting – all attempting to ensure no unauthorized executables run on protected devices.

To be fair, the big AV vendors have not been standing still. They are adapting and working to broaden their protection to keep pace with attackers. But even with all their tools packaged together, it cannot be enough. It's software, and it will never be perfect or defect-free. Their tools will always be vulnerable and under attack. We need to rethink how we do threat management as an industry, in light of these attacks and the cold hard reality that not all of them can be stopped. We have been thinking about what the threat management process will come to look like. We presented some ideas in the CISO's Guide to Advanced Attackers, but that was focused on what needs to happen to respond to an advanced attack.
Now we want to document a broader threat management process, which we will refine through 2014.

Threat Management Reimagined

Threat management is a hard concept to get your arms around. Where does it start? Where does it end? Isn't threat management really just another way of describing security? Those are hard questions without absolute answers. For the purposes of this research, threat management is about dealing with an attack. It's not about compliance, even though most mandates are responses to attacks that happened 5 years ago. It's not really about hygiene – keeping your devices properly configured and patched is good operational practice, not tied to a specific attack. It's not about finding resources to actually execute on these plans, nor is it an issue of communicating the value of the security team. Those are all responsibilities of the broader security program. Threat management is a subset of the larger security program – typically the most highly visible capability. So let's explain how we think about threat management (for the moment, anyway) and let you pick it apart.

  • Assessment: You cannot protect what you don't know about – that hasn't changed. So the first step is gaining visibility into all devices, data sources, and applications that present risk to your environment. And you need to understand the current security posture of everything you need to protect.
  • Prevention: Next you try to stop an attack from being successful. This is where most of the effort in security has gone for the past decade, with mixed (okay, lousy) results. A number of new tactics and techniques are modestly increasing effectiveness, but the simple fact is that you cannot prevent every attack. It has become a question of reducing your attack surface as much as practical. If you can stop the simplistic attacks, you can focus on the more advanced ones.
  • Detection: You cannot prevent every attack, so you need a way to detect attacks after they get through your defenses. There are a number of different options for detection – most based on watching for patterns that indicate a compromised device. The key is to shorten the time between when the device is compromised and when you discover the compromise (a small illustration follows this list).
  • Investigation: Once you detect an attack you need to verify the compromise and understand what it actually did. This typically involves a formal investigation, including a structured process to gather forensic data from devices, triage to determine the root cause of the attack, and a search to determine how broadly the attack has spread within your environment.
  • Remediation: Once you understand what happened you can put a
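To make the detection and threat intelligence ideas above a little more concrete, here is a minimal Python sketch of the basic lookup: file hashes and outbound destinations observed on a device are compared against shared indicator lists. The feed file name and the telemetry format are hypothetical, and real products layer behavioral analysis and kernel-level blocking on top of this; the sketch only illustrates the matching step.

import hashlib
import json

def sha256_of_file(path):
    """Hash a file the way a signature or reputation lookup would."""
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def load_indicators(path):
    """Load a hypothetical threat intel feed: {"bad_hashes": [...], "bad_ips": [...]}."""
    with open(path) as handle:
        feed = json.load(handle)
    return set(feed.get("bad_hashes", [])), set(feed.get("bad_ips", []))

def check_device(telemetry, bad_hashes, bad_ips):
    """Return indicator matches for one device's telemetry."""
    findings = []
    for file_path in telemetry.get("files", []):
        if sha256_of_file(file_path) in bad_hashes:
            findings.append(("known-bad file", file_path))
    for ip in telemetry.get("connections", []):
        if ip in bad_ips:
            findings.append(("known-bad destination", ip))
    return findings

if __name__ == "__main__":
    bad_hashes, bad_ips = load_indicators("threat_feed.json")  # hypothetical feed file
    telemetry = {"files": ["/tmp/suspect.bin"], "connections": ["203.0.113.50"]}
    for kind, value in check_device(telemetry, bad_hashes, bad_ips):
        print(f"ALERT: {kind}: {value}")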


New Paper: Defending Against Denial of Service Attacks

Just in case you had nothing else to do during the holiday season, you can check out our latest research on Application Denial of Service Attacks. This paper continues our research into Denial of Service attacks, following last year's Defending Against Denial of Service Attacks research. As we stated back then, DoS encompasses a number of different tactics, all aimed at impacting the availability of your applications or infrastructure. In this paper we dig much deeper into application DoS attacks. For good reason – as the paper says:

These attacks require knowledge of the application and how to break or game it. They can be far more efficient than just blasting traffic at a network, requiring far fewer attack nodes and less bandwidth. A side benefit of requiring fewer nodes is simplified command and control, allowing more agility in adding new application attacks. Moreover, the attackers often take advantage of legitimate application features, making defense considerably harder.

We expect a continued focus on application DoS attacks over time, so we offer both an overview of the common types of attacks you will see and possible mitigations for each one. After reading this paper you should have a clear understanding of how your application availability will be attacked – and more importantly, what you can do about it. We would like to thank our friends at Akamai for licensing this content. Without the support of our clients our open research model wouldn't be possible.

To sum up, here are some thoughts on defense: Defending against AppDoS requires a multi-faceted approach that typically starts with a mechanism to filter attack traffic, either via a web protection service running in the cloud or an on-premises anti-DoS device. The next layer of defense includes operational measures to ensure the application stack is hardened, including timely patching and secure configuration of components. Finally, developers must play their part by optimizing database queries and providing sufficient input validation to make sure the application itself cannot be overwhelmed through legitimate capabilities. Keeping applications up and running requires significant collaboration between development, operations, and security. This ensures not only that sufficient defenses are in place, but also that a well-orchestrated response maintains and/or restores service as quickly as possible. It is not a matter of if but when you are targeted by an Application Denial of Service attack.

Check out the landing page for the paper, or download the Defending Against Application Denial of Service Attacks PDF directly.
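As a small, hedged illustration of that last point about developers guarding expensive but legitimate features, here is a Python sketch of a per-client rate limit and a basic input check in front of a costly search handler. The function names, limits, and return codes are made up for the example; in practice this sits behind the cloud or on-premises filtering layers described above.

import time
from collections import defaultdict, deque

# Per-client sliding-window rate limiter: at most MAX_REQUESTS per WINDOW_SECONDS.
WINDOW_SECONDS = 60
MAX_REQUESTS = 30
_history = defaultdict(deque)

def allow_request(client_ip):
    now = time.monotonic()
    window = _history[client_ip]
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_REQUESTS:
        return False          # likely abuse of a legitimate but expensive feature
    window.append(now)
    return True

def validate_search_term(term):
    """Reject input that would turn an expensive query into a self-inflicted DoS."""
    if not isinstance(term, str) or not (1 <= len(term) <= 100):
        return False
    return term.isprintable()

def handle_search(client_ip, term):
    if not allow_request(client_ip):
        return 429, "Too Many Requests"
    if not validate_search_term(term):
        return 400, "Invalid input"
    return 200, f"results for {term!r}"   # hand off to the (optimized) query layer

print(handle_search("198.51.100.7", "holiday gifts"))

The same windowing logic applies whether the check lives in the application, the WAF, or an API gateway; the point is that legitimate functionality gets a budget.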


Security Management 2.5: Replacing Your SIEM Yet? [New Series]

Security Information and Event Management (SIEM) systems create a lot of controversy among security folks; they are one of the cornerstones on which every enterprise security program is built. Yet SIEM simultaneously generates the most complaints and general angst. Two years ago Mike and I completed a research project, "SIEM 2.0: Time to Replace your SIEM?", based upon a series of conversations with organizations who wanted more from their investment. Specifically they wanted more scalability, easier deployment, the ability to 'monitor up the stack' in the context of business applications, and better integration with enterprise systems (like identity). Over the past two years the pace of customer demands, and of platform evolution to meet those demands, has accelerated. What we thought was the tail end of a trend – second-generation SIEMs improving scalability using purpose-built data stores – turned out to be the tip of the iceberg. Enterprises wanted to analyze more types of data, from more sources, with more (read: better) analysis capabilities, to derive better information and keep pace with advanced attackers. Despite solid platform upgrades from a number of SIEM vendors, these requirements have blossomed faster than the vendors could respond. And sadly, some security vendors marketed "advanced capabilities" that were really the same old pig in a new suit, causing further chagrin and disappointment among their customers.

Whatever the reason, here we are two years later, listening to the same tale from customers looking to replace their SIEM (again) in light of these new requirements. You may feel like Bill Murray in Groundhog Day, reliving the past over and over again, but this time is different. The requirements have changed! Actually they have. The original architects of the early SIEM platforms could not have envisioned the kind of analysis required to detect attacks designed to evade SIEM tools. The attackers are thinking differently, and that means defenders who want to keep pace need to rip up their old playbook and very likely critically evaluate their old tools as well. Malware is now the major driver, but since you can't really detect advanced attacks based on a file signature anymore, you have to mine data for security information in a whole new way. Cloud computing and mobile devices are disrupting the technology infrastructure. And the collection and analysis of these and many other data streams (like network packet capture) are bursting the seams of SIEM. It doesn't stop at security alerting either. Other groups, from IT operations to risk to business analytics, also want to mine the collected security information, looking for new ways to streamline operations, maintain availability, and optimize the environment.

Moving forward, you'll need to heavily leverage your investments in security monitoring and analysis technologies. If those investments can't be leveraged, enterprises will move on and find something more in line with their requirements. Given the rapid evolution we've seen in SIEM/Log Management over the past 4-5 years, product obsolescence is a genuine issue. The negative impact of a product that has not kept pace with technical evolution and customer requirements cannot be trivialized. This pain becomes more acute in the event of a missed security incident, because the SIEM did not collect the requisite information – or worse, could not detect the threat.
Customers spend significant resources (both time and money) on the care and feeding of their SIEM. If they don't feel the value is in alignment with the investment, again they'll move on and search for better, easier, and faster products. It's realistic, if not expected, that these customers start questioning whether the incumbent offering makes sense for their organization moving forward. Additionally, firms are increasingly considering managed services and 3rd party security operations providers to address skills and resource shortages within internal groups. Firms simply don't have the internal expertise to look for advanced threats. This skills gap also promises to reshape the landscape of security management, so we'll kick off the series discussing these factors, setting the stage to update our guide to selecting a SIEM. Specifically, we will cover the following topics:

  • The Changing Needs of Security Management: As firms branch into cloud environments and offer mobile applications to their employees and customers, the definition of 'system' now encompasses use cases outside what has long been considered the corporate perimeter, changing the view of the "infrastructure" that needs to be monitored. Simultaneously, advanced malware attacks now require more types of data, threat intelligence, and policies to adequately detect. Additionally, firms are increasingly considering managed services and 3rd party security operations to address skills and resource shortages within internal groups. All of these factors are once again reshaping the landscape of security management, so we'll kick off the series discussing them to set the stage for re-evaluating the security management platform.
  • Evolution of the SIEM Platform (and Technology): Next we'll discuss the evolutionary changes in SIEM from the standpoint of platform capabilities. It's still all about more and more data. We'll cover architectural evolution, integration, and ongoing care and feeding of the environment to meet scaling requirements. We will also discuss how SIEM increasingly leverages other data sources, such as virtual servers, mobile events, big data analytics, threat feeds, and other human- and machine-generated data. But all of this data does nothing if you don't have the capabilities to do something with it, so we will discuss new analysis techniques, and updates to older approaches, that yield better results faster (a small illustration of this kind of analysis follows this list). Doing more with more means that, under the covers, scale and performance are achieved by virtualizing lower-cost commodity hardware and leveraging new data storage and data management architectures. SIEM remains the aggregation point for operations and security data, but the demand on the platform to 'do more with more data' is pushing the technical definition of SIEM forward and spawning the hybrid models needed to meet these requirements.
  • Revisiting Your Requirements: Given the evolution of both the technology and the attacks, it's time to revisit your specific requirements and
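To illustrate the kind of analysis buyers now expect, beyond simple signature matching, here is a minimal Python sketch of one classic correlation: a burst of failed logins followed by a success from the same source. The event format and threshold are hypothetical; commercial correlation engines work across far richer data, but the underlying pattern matching looks much like this.

from collections import defaultdict

FAILED_THRESHOLD = 5   # tune to your environment

def brute_force_then_success(events):
    """events: time-ordered dicts like {'src': '10.0.0.5', 'user': 'bob', 'result': 'failure'}."""
    failures = defaultdict(int)
    alerts = []
    for event in events:
        key = (event["src"], event["user"])
        if event["result"] == "failure":
            failures[key] += 1
        elif event["result"] == "success":
            if failures[key] >= FAILED_THRESHOLD:
                alerts.append(f"possible credential compromise: {event['user']} from {event['src']}")
            failures[key] = 0
    return alerts

sample = [
    {"src": "198.51.100.7", "user": "admin", "result": "failure"} for _ in range(6)
] + [{"src": "198.51.100.7", "user": "admin", "result": "success"}]
print(brute_force_then_success(sample))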


Security Assurance & Testing: Quick Wins

We started this Security Assurance and Testing (SA&T) series with the need for testing and the tactics that make sense within an SA&T program. But it is always helpful to see how the concepts apply to more tangible situations. So we will now show how an SA&T program can provide a quick win for the security team, with two (admittedly contrived) scenarios that show how SA&T can be used – both at the front end of a project and on an ongoing basis – to ensure the organization is well aware of its security posture.

Infrastructure Upgrade

For this first scenario let's consider an organization's move to a private cloud environment to support a critical application. This is a common situation these days. The business driver is better utilization of data center resources and more agility in deploying hardware resources to meet organizational needs. Obviously this is a major departure from the historical rack-and-provision approach. It is attractive to organizations because it enables better operational orchestration, allowing new devices ('instances' in cloud land) to be spun up and taken down automatically according to the application's scalability requirements. The private cloud architecture folks aren't totally deaf to security, so some virtualized security tools are implemented to enforce network segmentation within the data center and block some attacks from insiders. Without an SA&T program you would probably sign off on the architecture (which does provide some security) and move on to the next thing on your list. There wouldn't be a way to figure out whether the environment is really secure until it goes live – and then attackers will let you know quickly enough. Using SA&T techniques you can potentially identify issues at the beginning of implementation, saving everyone a bunch of heartburn. Let's enumerate some of the tests, to get a feel for what you might find:

  • Infrastructure scalability: You can capture network traffic to the application, and then replay it to test the scalability of the environment. After increasing traffic into the application, you might find that the cloud's auto-scaling capability is inadequate. Or it might scale a bit too well, spinning up new instances too quickly, or failing to take down instances quickly enough. All these issues affect the usability and value of the private cloud to the organization, and handling them properly can save a lot of heartburn for Ops.
  • Security scalability: Another aspect of the infrastructure you can test is its security – especially the virtualized security tools. By blasting the environment with a ton of traffic, you might discover your virtual security tools crumble rather than scaling – perhaps because VMs lack the custom silicon of their physical counterparts. Such a failure normally either fails open, allowing attacks through, or fails closed, impacting availability. You may need to change your network architecture to expose your security tools only to the amount of traffic they can handle. Either way, better to identify a potential bottleneck before it impairs availability or security. A quick win for sure.
  • Security evasion: You can also test security tools to see how they deal with evasion. If the new tools don't use the same policy as the perimeter, which has been tuned to deal with evasion effectively, the new virtual devices may require substantial tuning to ensure security within the private cloud.
  • Network hopping: Another feature of private clouds is their ability to define network traffic flows and segmentation – "Software Defined Networks". But if the virtual network isn't configured correctly, it is possible to jump across logical segments to access protected information.
  • Vulnerability testing of new instances: One of the really cool (and disruptive) aspects of cloud computing is the elimination of the need to change/tune configurations and patch running systems. Just spin up a new instance, fully patched and configured correctly, move the workload over, and take down the old one. But if new instances spin up with vulnerabilities or poor configurations, auto-scaling is not your friend. Test new instances on an ongoing basis to ensure proper security. Again, a win if something was amiss.

As you can see, many things can go wrong with any kind of infrastructure upgrade. A strong process to find breaking points in the infrastructure before going live can mitigate much of the deployment risk – especially if you are dealing with new equipment. Given the dynamic nature of technology, you will want to test the environment on an ongoing basis as well, ensuring that change doesn't add unnecessary attack surface. This scenario points out where many issues can be found. What happens if you can't find any issues? Does that reduce the value of the SA&T program? Actually, if anything, it enhances its value – by providing peace of mind that the infrastructure is ready for production.

New Application Capabilities

To dig into another scenario, let's move up the stack a bit to discuss how SA&T applies to adding new capabilities to an application serving a large user community – enabling commerce on a web site. Business folks like to sell stuff, so they like these kinds of new capabilities. This initiative involves providing access to a critical data store previously inaccessible directly from an Internet-facing application, which is an area of concern. The development team has run some scans against the application to identify application layer issues such as XSS, and addressed them before deployment by front-ending the application with a WAF. So a lot of the low-hanging fruit of application testing is gone. But that shouldn't be the end of testing. Let's look into some other areas which could uncover issues, by focusing on realistic attack patterns and tactics (a simple test sketch follows the list):

  • Attack the stack: You could use a slow HTTP attack to see if the application can defend against availability attacks on the stack. These attacks are very hard to detect at the network layer, so you need to make sure the underlying stack is configured to deal with them.
  • Shopping cart attack: Another type of availability attack uses the application's legitimate functionality against it. It's a bit like
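Here is the promised sketch: a minimal Python harness for the slow HTTP test mentioned above. It opens a batch of connections, sends partial headers, and trickles bytes to see whether the application stack (or a protection layer in front of it) times the connections out. The hostname is a placeholder; point this only at a staging environment you own and are authorized to test.

import socket
import time

def slow_http_probe(host, port=80, connections=50, hold_seconds=120, interval=10):
    """Open connections, send partial headers, and keep them alive with trickled bytes.
    This is a test harness for your own staging environment, not a tool for use
    against systems you are not authorized to test."""
    socks = []
    for _ in range(connections):
        s = socket.create_connection((host, port), timeout=5)
        s.send(b"GET /?test=slowhttp HTTP/1.1\r\nHost: " + host.encode() + b"\r\n")
        socks.append(s)

    deadline = time.time() + hold_seconds
    while time.time() < deadline and socks:
        for s in list(socks):
            try:
                s.send(b"X-Padding: a\r\n")   # never finish the request
            except OSError:
                socks.remove(s)               # the server (or a protection layer) dropped us
        time.sleep(interval)

    print(f"{len(socks)} of {connections} connections still held open at the end")
    for s in socks:
        s.close()

# slow_http_probe("staging.example.com")   # hypothetical target you control

If most connections are still open at the end, the stack (or whatever sits in front of it) is not enforcing sensible header timeouts.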


Security Assurance & Testing: Tactics and Programs

As we discussed in the introduction to this Security Assurance & Testing (SA&T) series, it is increasingly hard to adequately test infrastructure and applications before they go into production. But adversaries have the benefit of being able to target the weakest part of your environment – whatever it may be. So the key to SA&T is to ensure you are covering the entire stack. Does that make the process a lot more detailed and complex? Absolutely, but without a comprehensive test you can't be sure what will happen when you face real attackers. To discuss tactics we will consider how you would test your network, and then your applications. We will also discuss testing exfiltration, because preventing critical data from leaving your environment disrupts the Data Breach Triangle.

Testing Network Security

Reading the security trade publications, you would get the impression that attackers only target applications nowadays, and don't go after weaknesses in network or security equipment. Au contraire – attackers find the path of least resistance, whatever it is. So if you have a weak firewall ruleset or an easy-to-evade WAF and/or IPS, that's where they go. Advanced attackers are only as advanced as they need to be. If they can get access to your network via the firewall, evade the IPS, and then move laterally by jumping across VLANs… they will. And they can leave their 0-days on the shelf until they really need them. So you need to test every device that sees the flow of data. That includes network switches, firewalls, IDS/IPS, web application firewalls, network-based malware detection gear, web filters, email security gateways, SSL VPN devices, etc. If it sees traffic it can be attacked, and it probably will be, so you need to be ready. What should you actually test for network and security devices?

  • Scalability: Spec sheets may be, uh, inaccurate. Even if there is a shred of truth in the spec sheet, it may not apply to your configuration or application traffic. Make sure the devices will stand up to real traffic at the peak volumes you will see. Additionally, with denial of service attacks increasingly common, ensuring your infrastructure can withstand a volumetric attack is integral to maintaining availability.
  • Evasion: Similarly, if a network security device can be evaded, it doesn't matter how scalable it is, or how effective it is at blocking the attacks it does catch. So make sure you are testing for standard evasion tactics.
  • Reconfiguration: Finally, if the device can be reconfigured and unauthorized policy changes accepted, your finely-tuned and secure policy isn't worth much. So make sure your devices cannot be accessed except by an authorized party using acceptable policy management.

Application Layer

Once you are confident the network and security devices will hold up, move on to testing the application layer. Here are the highlights:

  • Profile inbound application traffic: Do this to understand your normal volumes, protocols, and destinations. Then you can build scenarios that represent abnormal use of the application, to test edge cases. Be sure to capture actual application traffic so you can hide the attack within it, the way real attackers will.
  • Application protection: You will also want to stress test the WAF and other application protections using standard application attack techniques – including buffer overflows, application fuzzing, cross-site scripting, slow HTTP, and other denial of service tactics that target applications.
Again, the idea is to identify the breaking points of the application before your adversaries do. The key aspect of this assurance and testing process is to test as much of the application as you can. As with a Quality Assurance test harness used by developers, you want to exercise as much of the code as possible to ensure it will hold up. Keep in mind that adversaries usually have time, so they will search every nook and cranny of your application to find its weak spot. Thus the need for comprehensive testing.

Exfiltration

The last aspect of your SA&T process is to see whether you can actually get data out. Unless the data can be exfiltrated, it's not really a breach, per se. Here you want to test your content filtering capabilities, including DLP, web filters, email security, and any other security controls that inspect content on the way out. Similar to the full code coverage approach discussed above, you want to make sure you are trying to exfiltrate through as many applications and protocols as possible. That means all the major social networks (and some not-so-major ones) and other logical applications like webmail. Also ensure the traffic is sent out encrypted, because attackers increasingly use multiple layers of encryption to exfiltrate data. Finally, test the feasibility of establishing connections with command and control (C&C) networks. These 'callbacks' identify compromised devices, so you will want to make sure you can detect this traffic before data is exfiltrated. This can involve sending traffic to known-bad C&C nodes, as well as using traffic patterns that indicate domain generation algorithms and other automated means of finding bot controllers.

The SA&T Program

Just as we made the case for continuous security monitoring – because ongoing (and never-ending) changes in your environment must be monitored to assess their impact on security posture – you need to think about SA&T from an ongoing rather than a one-time perspective. To really understand how effective your controls will be, you need to implement an SA&T program.

Frequency

The first set of decisions for establishing your program concerns testing frequency. The underlying network/security equipment and computing infrastructure tends not to change that often, so you can likely get away with testing these components less frequently – perhaps quarterly. But if your environment has constant infrastructure changes, or you don't control your infrastructure (outsourced data center, etc.), you may want to test more often. Another aspect of testing frequency is planning for ad hoc tests. These involve defining a set of catalysts that
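As one concrete example of the detection side of this, spotting the domain generation algorithms mentioned under Exfiltration above, here is a crude, hedged Python heuristic that flags long, high-entropy domain labels for review. The threshold values are illustrative; real detection combines many signals (NXDOMAIN rates, registration age, threat intelligence) rather than entropy alone.

import math
from collections import Counter

def shannon_entropy(label):
    counts = Counter(label)
    total = len(label)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def looks_generated(domain, entropy_threshold=3.5, min_length=10):
    """Crude heuristic: long, high-entropy second-level labels are worth a closer look."""
    label = domain.lower().split(".")[0]
    return len(label) >= min_length and shannon_entropy(label) >= entropy_threshold

queries = ["www.securosis.com", "mail.example.org", "xkqjz1m9t3vbq0d.info"]
for q in queries:
    print(q, "-> flag for review" if looks_generated(q) else "-> looks ordinary")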


Friday Summary: December 20, 2013 (Year-End Edition)

I have not done a Friday Summary in a couple of weeks, which is a post we have rarely missed over the last 6 years, so bad on me for being a flake. Sorry about that, but that does not mean I don't have a few things I want to talk about before year's end.

Noise. Lots of Bitcoin noise in the press, but little substance. Blogs like Forbes are speculating on Bitcoin investment potential, currency manipulation, and hoarding, tying in a celebrity whenever possible. Governments around the globe leverage the Gattaca extension of Godwin's Law when they say "YOU ARE EITHER WITH US OR IN FAVOR OF ILLEGAL DRUGS AND CHILD PORNOGRAPHY" – basing their arguments on unreasoning fear. That was the card played by the FBI and DHS this week, when they painted Bitcoin as a haven for money-launderers and child pornographers. But new and disruptive technologies always cause problems – and in this case the technology is fundamentally disruptive to governments and fiat currencies. Governments want to tax it, track it, control exchange rates, and lots of other stuff in their own interest. And unless they can do that they will label it evil. But lost in the noise are simple questions like "What is Bitcoin?" and "How does it work?" These are very important, and Bitcoin is the first virtual currency with a real shot at being a legitimate platform, so I want to delve into them today.

Bitcoin is a virtual currency system, as you probably already knew. The key challenges of digital currency systems are not assigning uniqueness in the digital domain – where we can create an infinite number of digital copies – nor assignment of ownership of digital property, but stopping fraud and counterfeiting. This is conceptually no different from traditional currency systems, but the implementation is of course totally different. When I started writing this post a couple of weeks ago, I ran across a blog post from Michael Nielsen that explains how the Bitcoin system works better than my own draft did, so I will just point you there. Michael covers the basic components of any digital currency system – simple applications of public-key cryptography and digital signatures/hashes, along with the validation processes that deter fraud and keep the system working. Don't be scared off by the word 'cryptography' – Michael uses understandable prose – so grab yourself a cup of coffee and give yourself a half hour to run through it. It's worth your time to understand how the system is set up, because you may be using it – or a variant of it – at some point in the future.

But ultimately what I find most unique about Bitcoin is that the community validates transactions, unlike most other systems, which use a central bank or designated escrow authorities to approve money transfers. This prevents any single government or entity from taking control. And having personally built a virtual currency system way back when, before the market was ready for such things, I always root for projects like Bitcoin. Independent and anonymous currency systems are a wonderful thing for the average person; in this day and age, where we use virtual environments – think video games and social media – virtual currency systems provide application developers an easy abstraction for money. And that's a big deal when you're not ready to tackle money or exchanges or ownership while building an application. When you build a virtual system it should be the game or the social interaction that counts.
Being able to buy and trade in the context of an application, without having a Visa logo in your face or dealing with someone trying to regulate – or even tax – the hours spent playing, is a genuine consumer benefit. And it allows any arbitrary currency to be created, which can be tuned to the digital experience you are trying to create. More reading if you are interested: Bitcoin, not NFC, is the future of payments, and Mastercoin (Thanks Roy!).

Ironically, this Tuesday I wrote an Incite on the idiocy of PoS security and the lack of point-to-point encryption, just before the breach at Target stores which Brian Krebs blogged about. If merchants don't use P2P encryption, from card swipe to payment clearing, they must rely on 'endpoint' security of the Point of Sale terminals. Actually, in a woulda/coulda/shoulda sense, there are many strategies Target could have adopted. For the sake of argument let's assume a merchant wants to secure their existing PoS and card swipe systems – which is a bit harder than securing desktop computers in an enterprise, and that is already a losing battle. The good news is that both the merchant and the card brands know exactly which cards have been used – meaning they know the scope of their risk and they can ratchet up fraud analytics on those specific cards. Or even better, cancel and reissue. But that's where the bad news comes in: no way will the card brands cancel credit cards during the holiday season – it would be a PR nightmare if holiday shoppers couldn't buy stuff for friends and family. Besides, the card brands don't want pissed-off customers because a merchant got hacked – this should be the merchant's problem, not theirs. I think this is David Rice's point in Geekonomics: people won't act against their own short-term best interests, even if that hurts them in the long run. Of course the attackers know this, which is exactly why they strike during the holiday season: the many transactions that don't fit normal card usage profiles make fraud harder to detect, and their stolen cards are less likely to be canceled en masse. Consumers get collateral poop-spray from the hacked merchant, so it's prudent to look for and dispute any charges you did not make. And, since the card brands have tried to tie debit and credit cards together, there are
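Returning to the Bitcoin discussion above: if you want to play with the hash-linking and proof-of-work ideas before reading Michael's post, here is a toy Python sketch. It is emphatically not how Bitcoin is implemented (the real protocol's transaction format, difficulty adjustment, and validation rules are far richer), but it shows why tampering with an already-validated ledger is expensive.

import hashlib
import json
import time

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def block_hash(block):
    body = json.dumps({"prev": block["prev"], "tx": block["tx"], "nonce": block["nonce"]},
                      sort_keys=True).encode()
    return sha256(body)

def mine_block(prev_hash, transactions, difficulty=4):
    """Find a nonce so the block hash starts with `difficulty` zero hex digits."""
    nonce = 0
    while True:
        candidate = {"prev": prev_hash, "tx": transactions, "nonce": nonce}
        digest = block_hash(candidate)
        if digest.startswith("0" * difficulty):
            candidate["hash"] = digest
            return candidate
        nonce += 1

chain = [{"hash": "0" * 64}]                      # genesis placeholder
start = time.time()
chain.append(mine_block(chain[-1]["hash"], ["alice pays bob 1 coin"]))
chain.append(mine_block(chain[-1]["hash"], ["bob pays carol 0.5 coin"]))
print(f"mined 2 blocks in {time.time() - start:.2f}s")

# Tampering with an earlier block breaks every later link, which is what lets the
# community (rather than a central bank) reject a rewritten ledger.
chain[1]["tx"] = ["alice pays mallory 1000 coins"]
print(block_hash(chain[1]) == chain[2]["prev"])   # False once block 1 is altered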


Datacard Acquires Entrust

Datacard Group, a firm that produces smart card printers and associated products, has announced its acquisition of Entrust. For those of you who are not familiar with Entrust, they were front and center in the PKI movement of the 1990s. Back then the idea was to issue a public/private key pair to uniquely identify every person and device in the universe. Ultimately that failed to scale and became unmanageable, with many firms complaining "I just spent millions of dollars so I can send encrypted email to the guy sitting next to me." So for you old-time security people out there asking yourselves "Hey, wait, isn't PKI dead?", the answer is "Yeah, kinda." Still others are saying "I thought Entrust was already acquired?", to which the answer is "Yes" – by investment firm/holding company Thoma Bravo in 2009. Entrust, just like all the other surviving PKI vendors, has taken its core technologies and fashioned them into other security products and services. In fact, if you believe the financial numbers in the press releases under Thoma Bravo, Entrust has been growing steadily. Still, for most of you, a smart card hardware vendor buying a PKI vendor makes no sense. But in terms of where the smart card market is heading, in response to disruptive mobile and cloud computing technologies, the acquisition makes sense. Here are some major points to consider:

What does this mean for Datacard?

  • One Stop Shop: The smart card market is an interesting case of 'coopetition', as each major vendor in the field ends up partnering on some customer deals, then competing head to head on others. "Cobbling together solutions" probably sounds overly critical, but the fact is that most card solutions are pieced together from different providers' hardware, software, and services. Customer requirements for specific processes, card customization, adjudication, and particular regions tend to force smart card producers to partner in order to fill the gaps. By pulling in a couple of key pieces from Entrust – specifically around certificate production, cloud, and PKI services – DCG comes very close to an end-to-end solution. When I read the press release from Datacard this morning, they used an almost meaningless marketing phrase: "reduce complexity while strengthening trust." I think they mean that a single vendor means fewer moving parts and fewer providers to worry about. That's possible, provided Datacard can stitch these pieces together so the customer (or service provider) does not need to.
  • EMV Hedge: If you read this blog on a regular basis, you will have noticed that every month I say EMV is not happening in the US – at least not the way the card brands envision it. While I hate to bet against Visa's ability to force change in the payment space, consumers really don't see the wisdom in carrying around more credit cards for shopping from their computer or mobile device. Those of you who no longer print out airline boarding passes understand the appeal of carrying one object for all these simple day-to-day tasks. Entrust's infrastructure for mobile certificates gives Datacard the potential to offer either a physical card or a mobile platform solution for identity and payment. Should the market shift away from physical cards for payment or personal identification, they will be ready to react accordingly.
  • Dipping a Toe into the Cloud: Smart card production technology is decidedly old school. Dropping a Windows-based PC on-site to do user registration and adjudication seems so 1999, but this remains the dominant model for drivers' licenses, access cards, passports, national ID, and so on. Cloud services are a genuine advance, and offer many advantages for scale, data management, software management, and linking all the phases of card production together. While Entrust does not appear to be on the cutting edge of cloud services, they certainly have infrastructure and experience which Datacard lacks. From this standpoint, the acquisition is a major step in the right direction, toward a managed service/cloud offering for smart card services. Honestly I am surprised we haven't seen more competitors do this yet, and I expect them to buy or build comparable offerings over time.

What does this mean for Entrust customers?

  • Is PKI Dead or Not? We have heard infamous analyst quotes to the effect that "PKI is dead." The problem is that PKI infrastructure is often erroneously conflated with PKI technologies. Most enterprises that jumped on the PKI infrastructure bandwagon in the 1990s soon realized that approach to identity was unmanageable and unscalable. That said, the underlying technologies of public key cryptography and X.509 certificates are not just alive and well, but critical to network security. And getting this technology right is not a simple endeavor. These tools are used in every national ID, passport, and "High Assurance" identity card, so getting them right is critical. This is likely Datacard's motivation for the acquisition, and it makes sense for them to leverage this technology across all their customer engagements, so existing Entrust PKI customers should not need to worry about product atrophy.
  • SSL: SSL certificates are more prevalent now than ever, because most enterprises, regardless of market, want secure network communications. Or at least they are compelled by some compliance mandate to secure network communications to ensure privacy and message integrity. For web and mobile services this means buying SSL certificates, a market which has been growing steadily for the last 5 years. While Entrust is not dominant in this field, they are one of the earliest and more trusted providers. That does not mean this acquisition is without risks. Can Datacard run an SSL business? The SSL certificate business is fickle, and there is little friction in switching from one vendor to another. We have been hearing complaints about one of the major vendors in this field having aggressive sales tactics and poor service, resulting in several small enterprises switching certificate vendors. There are also risks for a hardware company digesting a software business, with inevitable cultural and technical issues. And there are genuine threats to any certificate authority
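Since certificate hygiene keeps coming up in this context, here is a small Python sketch, standard library only, that reports who issued a server's SSL/TLS certificate and how many days remain until it expires: the kind of basic monitoring any certificate customer should run regardless of which CA they buy from. The hostname is a placeholder.

import socket
import ssl
import time

def cert_status(host, port=443):
    """Connect with verification enabled and report issuer and days until expiry."""
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    issuer = dict(pair[0] for pair in cert["issuer"]).get("organizationName", "unknown")
    days_left = int((ssl.cert_time_to_seconds(cert["notAfter"]) - time.time()) // 86400)
    return issuer, days_left

for host in ("www.example.com",):          # substitute the hosts you actually care about
    issuer, days_left = cert_status(host)
    print(f"{host}: issued by {issuer}, expires in {days_left} days")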


Incite 12/18/2013: Flow

As I sit down to write the last Incite of the year I cannot help but be retrospective. How will I remember 2013? It has been a year of ups and downs. Pretty much like every year. I set out to prove some hypotheses I had at the beginning of the year, and I did. I let some opportunities pass by and I didn't execute on others. Pretty much like every year. I had low lows and very high highs. Pretty much like every year.

I have gotten introspective over the second half of this year. And that's been reflected in my weekly missives. It's been a period of learning and evaluation for me. Of coming to grips with who I really am, what I like to do, and what I want to be in the next stage of my life. Of course there are no real answers to such existential questions, but it's about learning to live in a way that is modest, sustainable, and kind. As I look back, the most important thing I have learned this year is to flow. I spent so many years fighting against myself, pushing to be in a place I wasn't ready for, and to meet unrealistic expectations for achievement. It has been a process but I have let go of those expectations and made a concerted effort to Live Right Now. And that's a great thing.

The mental lever that flipped was actually a pretty simple analogy. It's about being in the river. Sometimes the current is slow and you just float along. You are still moving, but at an easy pace. Those are the times to look around, enjoy the scenery, and catch your breath. Because inevitably somewhere further down river you'll hit rapids. Things accelerate and you have no choice but to keep focused on what's right in front of you. You have to hold on, avoid the rocks, and navigate safely through. Then you look up and things calm down. You have an opportunity at that point to maybe wash up on the shore and take a rest. Or go in a different direction. But trying to slow things down in the rapids doesn't work very well. And trying to speed things up in a slow current doesn't work any better. Appreciate the pace and flow with it. Simple, right? It's like being in quicksand. You can't fight against it or you'll sink. It's totally unnatural, but you have to just relax and trust that your natural buoyancy will keep you afloat in the denser sand. Resist and struggle and you'll sink. Accept the situation, don't react abruptly or unthinkingly, and you have a chance. Yup, a lot like life.

So in 2013 I have learned about the importance of flowing with my life. Appreciate the slow times and prepare for the rapids. Like everything else, easy to say but challenging to do consistently. But life seems to give us plenty of opportunities to practice. At least mine does. Onward to 2014. From the Securosis clan to yours, have a happy holiday, and the Incite will return on January 8.

–Mike

Photo credit: "Flow" originally uploaded by Yogendra Joshi

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can get all our content in its unabridged glory. And you can get all our research papers too.
What CISOs Need to Know about Cloud Computing
  • Adapting Security for Cloud Computing
  • How the Cloud is Different for Security
  • Introduction

Defending Against Application Denial of Service
  • Building Protections In
  • Abusing Application Logic
  • Attacking the Application Stack

Newly Published Papers
  • Security Awareness Training Evolution
  • Firewall Management Essentials
  • Continuous Security Monitoring
  • API Gateways
  • Threat Intelligence for Ecosystem Risk Management
  • Dealing with Database Denial of Service
  • Identity and Access Management for Cloud Services
  • The 2014 Endpoint Security Buyer's Guide
  • The CISO's Guide to Advanced Attackers

Incite 4 U

  • The two sides of predictions: It's entertaining when Martin McKeay gets all fired up about something. Here he rails against the year-end prediction machine and advises folks to just say 'no' to their marketing teams when asked to provide these predictions. Like that's an option. Tech pubs need fodder to post (to drive page views) and marketing folks need press hits to keep their VPs and CEOs happy. Accept it. But here's the deal: security practitioners need to make predictions continuously. They predict whether their controls are sufficient given the attacks they expect. Whether the skills of their people will hold up under fire. Whether that new application will end up providing easy access for adversaries into the inner sanctum of the data center. It's true that press-friendly predictions have little accountability, but the predictions of practitioners have real ramifications, pretty much every day. So I agree with Martin that those year-end predictions are useless. But prediction is a key aspect of every business function, including security… – MR
  • The Most Wonderful Time of the Year: This time of year it's really easy for me to skim security news and articles. All I need to do is skip anything with the words 'Prediction' or 'Top Tips' in the title, and I can cull 95% of the holiday reading poop-hose. But for whatever reason I was slumming on Network World and saw Top Tips for Keeping Your Data Safe on The Cloud, an article directed at the mass market rather than corporate users. Rather than mock, in my merry mood, I'll go one better: I can summarize this advice into one simple actionable item. If you have sensitive data that you don't want viewed when your cloud provider is hacked, encrypt it before you send it there. Simple. Effective. And now it's time for me to make sure I have followed my own advice: Happy Holidays! – AL
  • Sync and you could be sunk: Cool research on the Tripwire
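As a companion to that 'encrypt it before you send it there' advice, here is a minimal Python sketch using the Fernet construction from the third-party cryptography package (pip install cryptography). The file name and the upload() call are placeholders; the point is simply that encryption happens on your machine, with a key your cloud provider never sees.

# Assumes the third-party 'cryptography' package is installed.
from cryptography.fernet import Fernet

def make_key():
    """Generate a key once and keep it somewhere the cloud provider never sees."""
    return Fernet.generate_key()

def encrypt_file(path, key):
    with open(path, "rb") as f:
        ciphertext = Fernet(key).encrypt(f.read())
    out_path = path + ".enc"
    with open(out_path, "wb") as f:
        f.write(ciphertext)
    return out_path                      # upload this, not the original

def decrypt_file(path, key):
    with open(path, "rb") as f:
        return Fernet(key).decrypt(f.read())

key = make_key()
# upload(encrypt_file("tax_return.pdf", key))   # 'upload' stands in for your provider's client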


Incite 12/11/2013: Commuter Hell

I'm pretty lucky – my most recent memories of a long commute are from back in 1988, when I worked in NYC during my engineering co-op in college. It was miserable. Car to bus to train, and then a walk of a couple blocks through midtown to the office. It made me old when I was young. I only did it for 6 months, and I can't imagine the toll it takes on folks who do it every day for decades. Today you can be kind of productive while commuting, which is different from the late 80s. There are podcasts and books on tape, and if you take aboveground public transportation, you can get network connectivity and bang through your email and social media updates before you even get to the office. But it still takes a toll. Time is not your own. You are at the mercy of traffic or mass transit efficiency.

I was recently traveling to see some clients (and doing some end-of-year strategy sessions), and the first day it took me over 90 minutes to go 35 miles. For some reason I actually left enough time to get there, and didn't screw up my day by being late for my first meeting. Getting back to my hotel for a dinner meeting took another hour. I was productive, making calls and the like. And amazingly enough, I didn't get pissy about the traffic or the idiocy I saw on the roads. I had nowhere else to be, so it was fine. The next day was more of the same. I was able to leave after the worst of rush hour, but it still took me 65 minutes to go 40 miles. A lot of that was stop and go. I started playing my mental games, pretending the highway was a football field and looking for openings to squeeze through. Then I revisited my plans for world domination. Then I went back in time to remember some of my favorite football games. Then I got around to preparing for the meeting a bit. So again, I didn't waste the time – but I don't commute very often at all. So when I was on my way to the airport Monday morning, and it again took me 65 minutes to get there, I was running out of things to think about. Three long commutes in less than a week took their toll. How many times can you take over the world? How many meetings can I mentally prepare for, knowing whatever I decide will be gone from my frontal cortex before I board the plane? Then I revisited my unwritten spy thriller novel. The plot still needs work, especially because I forgot all the great ideas I had during the commute. Ephemeral thoughts for the win.

So when my father-in-law expressed a desire to stop commuting into Washington DC and move to an office closer to his home, we were very supportive. Not just because he really shouldn't be driving anywhere (he is 80 years old, after all), but also because he seems to have finally realized that he could have been talking to clients during the 60-90 minutes he spent in the car each way, every day, for the past 35 years. I'm decent at math, but I'm not going to do that calculation, and I'm certainly not going to put a dollar figure on an opportunity cost comparable to the GDP of a small Central American nation. Which means the next time my biggest morning decision is which coffee shop to set up at for the day, I will be grateful that I have the flexibility to spend my time working. Not commuting.

-Mike

Photo credit: "Traffic in Brisbane" originally uploaded by Simon Forsyth

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can get all our content in its unabridged glory.
And you can get all our research papers too.

What CISOs Need to Know about Cloud Computing
  • Adapting Security for Cloud Computing
  • How the Cloud is Different for Security
  • Introduction

Defending Against Application Denial of Service
  • Building Protections In
  • Abusing Application Logic
  • Attacking the Application Stack

Newly Published Papers
  • Security Awareness Training Evolution
  • Firewall Management Essentials
  • Continuous Security Monitoring
  • API Gateways
  • Threat Intelligence for Ecosystem Risk Management
  • Dealing with Database Denial of Service
  • Identity and Access Management for Cloud Services
  • The 2014 Endpoint Security Buyer's Guide
  • The CISO's Guide to Advanced Attackers

Incite 4 U

  • It comes down to trust: In the world of encryption we try to use advanced math to prove or disprove the effectiveness of ciphers, entropy collection, and the generation of pseudo-random numbers. But in some cases you simply cannot know the unknown, so it comes down to trust, which is why I think the FreeBSD developers' removal of the "RDRAND" and "Padlock" pseudo-random number generation (PRNG) facilities – provided by Intel and VIA respectively – is a good idea. There is concern that these routines might not be free of NSA adaptation. Even better, they chose Yarrow as a replacement – a PRNG which John Kelsey and Bruce Schneier designed specifically because they neither trusted other PRNGs nor could find one that provided good randomness. Yarrow, like Blowfish, is an effective and trustworthy choice. Bravo! – AL
  • Doing the web app two-step: I'm a big fan of 2FA (two-factor authentication), especially for key web apps where I store stuff I'd rather not see on WikiLeaks. Like anything I have access to would be interesting to the conspiracy crowd, but let me dream, would ya? So I was pretty early to use Google's 2FA for Apps, and at this point I have it set up on Twitter, Facebook, and Evernote. Why not? It's easy, it works, and it makes it quite a bit harder for someone who isn't me to access my accounts. But there is a
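For the curious, the rotating codes behind most of these 2FA setups are time-based one-time passwords (TOTP, RFC 6238). Here is a standard-library Python sketch of the calculation; the base32 secret below is a made-up example, and real services add enrollment, rate limiting, and backup codes on top of this.

import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
    """Compute the current TOTP code for a shared base32 secret (RFC 6238/4226)."""
    key = base64.b32decode(secret_b32.upper())
    counter = int(time.time()) // step                 # 30-second time window
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                         # dynamic truncation
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))    # example secret; both sides compute the same code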


Poor Man’s Immortality

One of our esteemed colleagues to the North, Dave Lewis, summed up a danger lurking in almost every job in his recent CSO post, We need to be uncomfortable. Dave talks about realizing he could check out of a job and no one would notice, and how he knew it was time to find the next challenge. He's right. The builders give way to the maintainers. Not that there is anything wrong with that per se. As Dave puts it:

What I have seen happen in a few organizations is that they get used to doing things a very specific way and are not typically seen to think beyond the confines of their box. They have their infrastructure and governance framework to operate within and not a whole lot of incentive to approach things differently. They had become comfortable.

Complacency kills innovation. It kills forward motion. If you are in a role for too long and you get too good at it, you can check out. That kills your motivation. And that's fine for some folks. As Dave says, some people are maintainers. At the other end of that scale are builders. If you want any chance of being happy in this life, you had better know where you lie on that continuum. Put a builder into a maintainer role, and the dental treatment from Marathon Man would seem like a walk in the park. Likewise, put a maintainer into a builder role and they quickly become paralyzed. So what to do? Embrace who you are and act accordingly. Back to Dave:

I learned that security teams cannot sit on their laurels and enjoy the ride. We as security practitioners as well as at an organizational level need to be uncomfortable. Let me explain what I mean. If your security practice or even you yourself have become stuck in a rut there needs to be a change. Whether that is moving on to a new job or simply reviewing the way security is being managed in the organization it should be clear that inertia kills.

I'll differ a little, because I don't think sitting on laurels and enjoying the ride are mutually exclusive. The role of building and improving and optimizing provides tremendous enjoyment for a guy like me. Whereas some folks fall into the opposite camp, and sitting on their laurels is the ride. But Dave is right. If you are miscast in your current role you need to get back to who you are and what you do. Or it will get messy. You have to trust me on that one.

So why did I call this post Poor Man's Immortality? I know you are wondering. One of my mentors taught me that comfort can be viewed as a poor man's immortality. Comfort intimates the desire for things to stay the same, to achieve a form of immortality. I'm not interested in that. As I look back, I live for the discomfort. That's not a choice for everyone, but it is for me. And evidently for Dave as well.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.