Incite 7/1/2015: Explorers

When I take a step back I see I am pretty lucky. I’ve seen a lot of very cool places, and experienced a lot of different cultures through my business travels. And now I’m at a point in life where I want to explore more. Not just do business hotels and see the sights from the front seat of a colleague’s car or taxi. I want to explore and see all the cool things this big world has to offer.

It hasn’t always been this way. For the first two decades of my career, I was so focused on getting to the next rung on the career ladder that I forgot to take in the sights. And forget about smelling the roses. That would take time away from my plans for world domination. In hindsight that was ridiculous. I’m certainly not going to judge others who still strive for world domination, but that doesn’t interest me any more.

I’m also at a point in life where my kids are growing up, and I only have a few more years to show them what I’ve learned is important to me. They’ll need to figure out what’s important to them, but in the meantime I have a chance to instill a love of exploration. An appreciation of cultures. And a yearning to see and experience the world. Not from the perspective of their smartphone screens, but by getting out there and experiencing life.

XX1 left for a teen tour last Saturday. Over the next month she’ll see a huge number of very cool things in the western part of the US. The itinerary is fantastic, and made me wonder if I could take a month off to tag along. It’s not cheap, and I’m very fortunate to be able to provide her with that opportunity. All I can do is hope that she becomes an explorer, and explores throughout her life.

I have a cousin who just graduated high school. He’s going to do two years of undergrad in Europe to learn international relations – not in a classroom on a sheltered US campus (though there will be some of that), but out in the world. He’s also fortunate and has already seen some parts of the world, and he’s going to see a lot more over the next four years. It’s very exciting. You can bet I’ll be making at least two trips over there so we can explore Europe together. And no, we aren’t going to do backpacks and hostels. This boy likes hotels and nice meals.

Of course global exploring isn’t for everyone. But it’s important to me, and I’m going to try my damnedest to impart that to my kids. And I have multiple goals. First, I think individuals who see different cultures and different ways of thinking are less likely to judge people with different views. Every day we see the hazards of judgmental people who can’t understand other points of view and think the answer is violence and negativity. But it’s also clear that we move in a global business environment, which means that to prosper, my kids will need to understand different cultures and appreciate different ways of doing things. It turns out the only way to really gain those skills is to get out there and explore. Coolest of all is the fact that we all need travel buddies. I can’t wait for the days when I explore with my kids – not as a parent/child thing, but as friends going to check out cool places.

–Mike

Photo credit: “Dora the Explorer” originally uploaded by Hakan Dahlstroem

The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. You can check it out on YouTube. Take an hour and watch it. Your emails, alerts, and Twitter timeline will be there when you get back.

Securosis Firestarter

Have you checked out our new video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.

• May 26 – We Don’t Know Sh–. You Don’t Know Sh–
• May 4 – RSAC wrap-up. Same as it ever was.
• March 31 – Using RSA
• March 16 – Cyber Cash Cow
• March 2 – Cyber vs. Terror (yeah, we went there)
• February 16 – Cyber!!!
• February 9 – It’s Not My Fault!
• January 26 – 2015 Trends
• January 15 – Toddler
• December 18 – Predicting the Past
• November 25 – Numbness
• October 27 – It’s All in the Cloud
• October 6 – Hulk Bash
• September 16 – Apple Pay
• August 18 – You Can’t Handle the Gartner

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.

• Threat Detection Evolution: Analysis; Data Collection; Why Evolve?
• Network-based Threat Detection: Operationalizing Detection; Prioritizing with Context; Looking for Indicators; Overcoming the Limits of Prevention
• Applied Threat Intelligence: Building a TI Program; Use Case #3, Preventative Controls; Use Case #2, Incident Response/Management; Use Case #1, Security Monitoring; Defining TI
• Network Security Gateway Evolution: Introduction

Recently Published Papers

• Endpoint Defense: Essential Practices
• Cracking the Confusion: Encryption & Tokenization for Data Centers, Servers & Applications
• Security and Privacy on the Encrypted Network
• Monitoring the Hybrid Cloud
• Best Practices for AWS Security
• Securing Enterprise Applications
• Secure Agile Development
• Trends in Data Centric Security
• Leveraging Threat Intelligence in Incident Response/Management
• The Future of Security

Incite 4 U

Polishing the crystal ball: Justin Somaini offers an interesting perspective on The Future of Security Solutions. He highlights a lot of disruptive forces poised to fundamentally change how security happens over the next couple of years. To make the changes somewhat tangible and less overwhelming, Justin breaks the security world into a few buckets: Network Controls Management, Monitoring and Threat Response, Software Development, Application Management, Device Management, and Risk Management/GRC. Those buckets are


Threat Detection: Analysis

As discussed in our last post, evolved threat detection’s first step is gathering internal and external security data. Once you have the data aggregated, you need to analyze it to look for indications that you have compromised devices and/or malicious activity within your organization.

Know Your Assets

You know the old business adage: you can’t manage it if you can’t see it. In security monitoring parlance, you need to discover new assets – and changes to existing ones – to monitor them, and ultimately to figure out when a device has been compromised. A key aspect of threat detection remains discovery. The enemy of the security professional is surprise, so it is essential to always be aware of network topology and the devices on the network. All devices, especially those pesky rogue wireless access points and other mobile devices, provide attack surface to adversaries.

How can you make sure you are continuously discovering these devices? You scan your address space. Of course there is active scanning, but that runs periodically. To fill in between active scans, passive scanning watches network traffic streaming by to identify devices you haven’t seen or which have changed. Once a device is identified passively, you can launch an active scan to figure out what it’s doing (and whether it is legitimate). Don’t forget to discover your entire address space – which means both IPv4 and IPv6. Most discovery efforts focus on PCs and servers on the internal network, but that may not be enough anymore; it is typically endpoints that end up compromised, so you might want to discover both full computers and mobile devices. Finally, you will need to figure out how to discover assets in your cloud computing environments. This requires integration with cloud consoles to ensure you know about new cloud-based resources and can monitor them appropriately.

After you have a handle on the devices within your environment, the next step is to classify them. We recommend a simple classification, involving roughly four groupings. The most important bucket includes critical devices with access to private information and/or valuable intellectual property. Next look for devices behaving maliciously. These devices may not hold sensitive information, but adversaries can move laterally from compromised devices to critical devices. Then you have dormant devices, which may have connected to a command and control infrastructure but aren’t currently doing anything malicious. Finally, there are all the other devices which aren’t doing anything suspicious – which you likely don’t have time to worry about. We introduced this categorization in the Network-based Threat Detection series – check it out if you want more detail.

Finally, we continue to harp on the criticality of a consistent process for threat detection, which includes discovery and classification. As with data collection, your technology environment is dynamic, so what you saw 10 minutes ago may have changed 20 minutes from now – or sooner. You need a strong process to ensure you always understand what is happening in your environment.

The C Word

Correlation has always been a challenge for security folks. It’s not because the math doesn’t work – the math works just fine. Event correlation has been a challenge because you needed to know what to look for at a very granular level.
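To make “granular” concrete, here is a minimal sketch of a traditional correlation rule – hypothetical event records and thresholds, not any particular SIEM’s rule language – that links repeated authentication failures to a subsequent outbound connection from the same host:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical, already-normalized event records. A real SIEM would boil
# thousands of log formats down to something similar.
EVENTS = [
    {"ts": datetime(2015, 7, 1, 9, 0), "host": "web01", "type": "auth_failure"},
    {"ts": datetime(2015, 7, 1, 9, 1), "host": "web01", "type": "auth_failure"},
    {"ts": datetime(2015, 7, 1, 9, 2), "host": "web01", "type": "auth_failure"},
    {"ts": datetime(2015, 7, 1, 9, 4), "host": "web01", "type": "outbound_conn",
     "dest": "203.0.113.77"},
]

def correlate(events, failures=3, window=timedelta(minutes=10)):
    """Alert when `failures` auth failures on a host are followed by an
    outbound connection from that host within `window`."""
    recent_fails = defaultdict(list)
    alerts = []
    for e in sorted(events, key=lambda e: e["ts"]):
        if e["type"] == "auth_failure":
            recent_fails[e["host"]].append(e["ts"])
        elif e["type"] == "outbound_conn":
            in_window = [t for t in recent_fails[e["host"]] if e["ts"] - t <= window]
            if len(in_window) >= failures:
                alerts.append((e["host"], e["dest"], e["ts"]))
    return alerts

print(correlate(EVENTS))  # [('web01', '203.0.113.77', datetime(2015, 7, 1, 9, 4))]
```

Every event type, threshold, and time window here had to be chosen in advance – which is exactly why this approach struggles against attacks nobody has seen before, as discussed next.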
Given the kinds of attacks and advanced adversaries many organizations face, you cannot afford to count on knowing what’s coming, so it’s hard to find new and innovative attacks via traditional correlation. This has led to generally poor perceptions of SIEM and IDS/IPS. But that doesn’t mean correlation is useless for security. Quite the opposite. Looking for common attributes, and linking events together into meaningful models of possible attacks, provides a practical way to investigate security events. And you don’t want to succumb to the same attacks over and over again, so it is still important to look for indicators of attacks that have been used against you. Even better if you can detect indicators reported by other organizations, via threat intelligence, and avoid those attacks entirely.

Additionally, you can (and should) use threat modeling to map out a number of reasonable attack patterns, and look for those common attacks. In fact, your vendor or service provider’s research team has likely built some of these common patterns into their product, to kickstart your efforts at building out correlation rules. These research teams also keep their correlation rules current, based on what they see in the wild. Of course you can never know all possible attacks, so you also need to apply behavioral and other advanced analytical techniques to catch attacks you have not seen before.

Looking for Outliers

Technology systems have typical activity patterns. Whether network traffic, log events, transactions, or any other kind of data source, you can establish an activity profile for how systems normally behave. Once the profile is established, you look for anomalous activity – outliers – that may represent malicious activity. These outliers could be anything, from any data source you collect. With a massive trove of data, you can take advantage of advanced “Big Data” analytics (no, we don’t like that overly vague term). New technologies can churn through huge amounts of data, scanning for abnormal activity patterns. You need an iterative process to refine thresholds and baselines over time. Yes, that means ongoing care and feeding of your security analytics. Activity evolves over time, so today’s normal might be anomalous in a month.

Setting up these profiles and maintaining the analytics typically requires advanced skills. The new term for these professionals is data scientists. Yes, it’s a shiny term, and practitioners are expensive. But a key aspect of detecting threats is looking for outliers, and that requires data scientists, so you’ll need to pay up. Just ensure you have sufficient resources to investigate the alerts coming from your analytics engine, because if you aren’t staffed to triage and validate alerts, you waste the benefit of earlier threat detection. Alternatively, organizations without these sophisticated internal resources should consider allowing a vendor or service provider to update and tune their correlation rules and analytics for detection. This


Threat Detection Evolution: Data Collection

The first post in this series set the stage for the evolution of threat detection. Now that we’ve made the case for why detection must evolve, let’s work through the mechanics of what that actually means. It comes down to two functions: collecting security data, and analyzing the collected data. First we’ll go through what data is helpful and where it should come from.

Threat detection requires two main types of security data. The first is internal data: security data collected from your devices and other assets within your control. It’s the stuff PCI-DSS has been telling you to collect for years. Second is external data, more commonly known as threat intelligence. But here’s the rub: there is no useful intelligence in external threat data without context for how that data relates to your organization. But let’s not put the cart before the horse. We need to understand what security data we have before worrying about external data.

Internal Data

You’ve likely heard a lot about continuous monitoring, because it is such a shiny and attractive term to security types. The problem, as we described in Vulnerability Management Evolution, is that ‘continuous’ can have a bunch of different definitions, depending on who you are talking to. We have a rather constructionist view (meaning, look at the dictionary) and figure the term means “without cessation.” But in many cases monitoring assets continually doesn’t really add much value over regular and reliable daily monitoring. So we prefer consistent monitoring of internal resources. That may mean truly continuous for high-profile assets at great risk of compromise, or possibly every week for devices/servers that don’t change much and don’t access high-value data. The key is to be consistent about when you collect data from resources, and to ensure the data is reliable.

There are many data sources you might collect from for detection, including:

• Logs: The good news is that pretty much all your technology assets generate logs in some way, shape, or form – whether a security or network device, a server, an endpoint, or even a mobile device. Odds are you can’t manage to collect data from everything, so you’ll need to choose which devices to monitor, but pretty much all devices generate log data.
• Vulnerability Data: When trying to detect a potential issue, knowing which devices are vulnerable to what can be important for narrowing down your search. If you know a certain attack targets a certain vulnerability, and you only have a handful of devices that haven’t been patched to address that vulnerability, you know where to look.
• Configuration Data: Configuration data yields similar information to vulnerability data, providing context to understand whether a device could be exploited by a specific attack.
• File Integrity: File integrity monitoring provides important information for figuring out which key files have changed. If a system file has been tampered with outside of an authorized change, it may indicate nefarious activity and should be checked out.
• Network Flows: Network flow data can identify patterns of typical (normal) network activity, which enables you to look for patterns that aren’t exactly normal and could represent reconnaissance, lateral movement, or even exfiltration.

Once you decide what data to collect, you have to figure out where to collect it from, and how much. This involves selecting logical collection points and deciding where to aggregate the data, which depends on the architecture of your technology stack.
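Wherever you aggregate, sources arrive in different formats, which is where the normalization discussed below comes in. Here is a minimal sketch – hypothetical field names and log formats, not any product’s schema – of boiling two very different sources down to a comparable event record:

```python
import json
import re
from datetime import datetime, timezone

# Hypothetical common schema: source, host, user, event, ts. Granularity is
# lost on purpose - that is the compromise that makes events comparable.

SYSLOG_RE = re.compile(r"(?P<host>\S+) sshd\[\d+\]: Failed password for (?P<user>\S+)")

def normalize_syslog(line):
    """Map a raw sshd syslog line into the common schema."""
    m = SYSLOG_RE.search(line)
    if not m:
        return None
    # Real syslog carries its own timestamp; simplified here.
    return {"source": "syslog", "host": m.group("host"), "user": m.group("user"),
            "event": "auth_failure", "ts": datetime.now(timezone.utc).isoformat()}

def normalize_fw_json(record):
    """Map a hypothetical JSON firewall record into the same schema."""
    r = json.loads(record)
    return {"source": "firewall", "host": r["src_ip"], "user": None,
            "event": "conn_" + r["action"], "ts": r["timestamp"]}

print(normalize_syslog("web01 sshd[4242]: Failed password for root from 203.0.113.5"))
print(normalize_fw_json('{"src_ip": "10.1.2.3", "action": "deny", '
                        '"timestamp": "2015-07-01T09:00:00Z"}'))
```

The compromise is visible in what gets dropped: the common record keeps only the fields every source can reasonably supply.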
Many organizations opt for a centralized aggregation point to facilitate end-to-end analysis, but that is contingent on the size of the organization. Large enterprises may not be able to handle the scale of collecting everything in one place, and should consider some kind of hierarchical collection/aggregation strategy, where data is stored and analyzed locally and a subset is sent upstream for central analysis.

Finally, we need to mention the role of the cloud in collection and aggregation, because almost everything is offered in the cloud or as a service nowadays. The reality is that cloud-based aggregation and analysis depend on a few things. The first is the amount of data. Moving logs or flow records is not a big deal, because they are pretty small and highly compressible. Moving network packets is a much larger endeavor, and hard to shift to a cloud-based service. The other key determinant is data sensitivity – some organizations are not comfortable having their key security data outside their control, in someone else’s data center/service. That’s an organizational and cultural issue, but we’ve seen a much greater level of comfort with cloud-based log aggregation over the past year, and expect it to become far more commonplace within a 2-year planning horizon.

The other key aspect of internal data collection is integration and normalization of the data. Different data sources have different formats, so data must be normalized before datasets can be compared. That involves compromise in the granularity of common data formats, and can favor an integrated approach where all data sources are already integrated into a common security data store. Then you (as the practitioner) don’t really need to worry about making all those compromises – you can bet your vendor or service provider has already done the work. Also consider the availability of resources for dealing with these disparate data sources. The key issue, mentioned in the last post, remains the skills shortage, so starting a data aggregation/collection effort that depends on skilled resources to manage normalization and integration of data may not be the best idea. This doesn’t really have much to do with the size of the organization – it’s about the sophistication of staff – security data integration is an advanced function that can be beyond even large organizations with less mature security efforts.

Ultimately your goal is visibility into your entire technology infrastructure. An end-to-end view of what’s happening in your environment, wherever your data is, gives you a basis for evolving your detection capabilities.

External Data

We have published a lot of research on threat intel to date, most recently a series on Applied Threat Intelligence, which summarized the three main use cases we see for external data. There


Incite 6/10/2015: Twenty Five

This past weekend I was at my college reunion. It’s been twenty-five years since I graduated. TWENTY FIVE. It’s kind of stunning when you think about it. I joked after the last reunion in 2010 that the seniors then were in diapers when I was graduating. The parents of a lot of this year’s seniors hadn’t even met. Even scarier, I’m old enough to be their parent. It turns out a couple friends I graduated with actually have kids in college now. Yeah, that’s disturbing.

It was great to be on campus. Life is busy, so I only see some of my college friends every five years. But it seems like no time has passed. We catch up about life and things, show some pictures of our kids, and fall right back into the friendships we’ve maintained for almost thirty years. Facebook helps people feel like they are still in touch, but we aren’t. Facebook isn’t real life – it’s what you want to show the world. Fact is, everything changes, and most of that you don’t see. Some folks have been through hard times. Others are prospering.

Even the campus has evolved significantly over the past five years. The off-campus area is significantly different. Some of the buildings, restaurants, and bars have the same names, but they aren’t the same. One of our favorite bars, called Rulloff’s, shut down a few years back. It was recently re-opened and pretty much looked the same. But it wasn’t. They didn’t have Bloody Marys on Thursday afternoon. The old Rulloff’s would have had gallons of Bloody Mary mix ready for reunion, because that’s what many of us drank back in the day. The new regime had no idea. Everything changes.

Thankfully a bar called Dunbar’s was alive and well. They had a drink called the Combat, which was the root cause of many a crazy night during college. It was great to go into D-bars and have it be pretty much the same as we remembered. It was a dump then, and it’s a dump now. We’re trying to get one of our fraternity brothers to buy it, just to make sure it remains a dump. And to keep the Combats flowing.

It was also interesting to view my college experience from my new perspective. Not to overdramatize, but I am a significantly different person than I was at the last reunion. I view the world differently. I have no expectations for my interactions with people, and am far more accepting of everyone and appreciative of their path. Every conversation is an opportunity to learn, which I need. I guess the older I get, the more I realize I don’t know anything. That made my weekend experience all the more gratifying. The stuff that used to annoy me about some of my college friends was no longer a problem. I realized it had always been my issue, not theirs. Some folks could tell something was different when talking to me, and that provided an opportunity to engage at a different level. Others couldn’t, and that was fine by me; it was fun to hear about their lives.

In five years more stuff will have changed. XX1 will be in college herself. All of us will undergo more life changes. Some will grow, others won’t. There will be new buildings and new restaurants. And I’ll still have an awesome time hanging out in the dorms until the wee hours, drinking cocktails and enjoying time with some of my oldest friends. And drinking Combats, because that’s what we do.

–Mike

Photo credit: “D-bars” taken by Mike in Ithaca, NY

The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. You can check it out on YouTube. Take an hour and watch it. Your emails, alerts, and Twitter timeline will be there when you get back.

Securosis Firestarter

Have you checked out our new video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.

• May 26 – We Don’t Know Sh–. You Don’t Know Sh–
• May 4 – RSAC wrap-up. Same as it ever was.
• March 31 – Using RSA
• March 16 – Cyber Cash Cow
• March 2 – Cyber vs. Terror (yeah, we went there)
• February 16 – Cyber!!!
• February 9 – It’s Not My Fault!
• January 26 – 2015 Trends
• January 15 – Toddler
• December 18 – Predicting the Past
• November 25 – Numbness
• October 27 – It’s All in the Cloud
• October 6 – Hulk Bash
• September 16 – Apple Pay
• August 18 – You Can’t Handle the Gartner

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.

• Threat Detection Evolution: Why Evolve?
• Network-based Threat Detection: Operationalizing Detection; Prioritizing with Context; Looking for Indicators; Overcoming the Limits of Prevention
• Applied Threat Intelligence: Building a TI Program; Use Case #3, Preventative Controls; Use Case #2, Incident Response/Management; Use Case #1, Security Monitoring; Defining TI
• Network Security Gateway Evolution: Introduction

Recently Published Papers

• Endpoint Defense: Essential Practices
• Cracking the Confusion: Encryption & Tokenization for Data Centers, Servers & Applications
• Security and Privacy on the Encrypted Network
• Monitoring the Hybrid Cloud
• Best Practices for AWS Security
• Securing Enterprise Applications
• Secure Agile Development
• Trends in Data Centric Security
• Leveraging Threat Intelligence in Incident Response/Management
• The Future of Security

Incite 4 U

Vulnerabilities are not intrusions: Richard Bejtlich is a busy guy. As CSO of FireEye, I’m sure his day job keeps him pretty busy, along with all his external responsibilities gladhanding big customers. So when he writes something on his personal blog, you know he’s pissed off. And he’s really pissed that parties within the US federal government don’t seem to understand the difference between vulnerabilities and intrusions. In the


Threat Detection Evolution: Why Evolve? [New Series]

As we discussed recently in Network-based Threat Detection, prevention isn’t good enough any more. Every day we see additional proof that adversaries cannot be reliably stopped. So we have started to see the long-awaited movement of focus and funding from prevention to detection and investigation. That said, for years security practitioners have been trying to make sense of security data to shorten the window between compromise and detection – largely unsuccessfully. Not to worry – we haven’t become the latest security Chicken Little, warning everyone that the sky is falling. Mostly because it fell a long time ago, and we have been working to pick up the pieces ever since.

It can be exhausting to chase alert after alert, never really knowing which are false positives and which indicate real active adversaries in your environment. Something has to change – it is time to advance the practice of detection, to provide better and more actionable alerts. This requires thinking more broadly about detection, and starting to integrate the various security monitoring systems in use today. So it’s time to bring our recent research on detection and threat intelligence together, within the context of Threat Detection Evolution. As always, we are thankful that some forward-looking organizations see value in licensing our content to educate their customers. AlienVault plans to license the resulting paper at the conclusion of the series, and we will build the content using our Totally Transparent Research methodology.

(Mostly) Useless Data

There is no lack of security data. All your devices stream data all the time. Network devices, security devices, servers, and endpoints all generate a ton of log data. Then you collect vulnerability data, configuration data, and possibly network flows or even network packets. You look for specific attacks with tools like intrusion detection devices and SIEM, which generate lots of alerts. You probably have all this security data in a variety of places, with separate alerting policies implemented within each monitoring tool. It’s hard enough to stay on top of a handful of consoles generating alerts, but when you get upwards of a dozen or more, a consistent view of your environment isn’t really feasible.

It’s not that all this data is useless. But it’s not really useful either. There is value in having the data, but you can’t unlock that value without some level of integration, normalization, and analytics. We have heard it said that finding attackers is like finding a needle in a stack of needles. It’s not a question of whether there is a needle there – you need to figure out which needle is the one poking you. This amount of traffic and activity generates so much data that it is trivial for adversaries to hide in plain sight, obfuscating their malicious behavior in a morass of legitimate activity. You cannot really figure out what’s important until it’s too late. And it’s not getting easier – cloud computing and mobility promise to disrupt the traditional order of how technology is delivered and how information is consumed by employees, customers, and business partners, so there will be more data and more activity to further complicate threat detection.

Minding the Store…

In the majority of our discussions with practitioners, sooner or later we get around to the challenge of finding skilled resources to implement the security program. It’s not a funding thing – companies are willing to invest, given the high profile of threats. The challenge is resource availability, and unfortunately there is no easy fix. The security industry is facing a large enough skills gap that there is no obvious answer. Why can’t more security practitioners be found? What are the constraints on training more people to do security? The answers are actually pretty counterintuitive, because security isn’t a typical job. It’s hard for a n00b to come in and be productive in their first couple of years. Even those with formal (read: academic) training in security disciplines need a couple years of operational experience before they start to become productive. And a particular mindset is required to handle a job where true success is a myth. It’s not a matter of whether an organization will be breached – it’s when, and that is hard for most people to deal with day after day. Additionally, if your organization is not a Global 1000 company or major consulting firm, finding qualified staff is even harder, because you have many of the same problems as a large enterprise, but far less budget and available skills to solve them.

Clearly what we are doing is insufficient to address the issue moving forward, so we need to look at the problem differently. It’s not a challenge that can be fixed by throwing people at it, because there aren’t enough people. It’s not a challenge that can be fixed by throwing products at it either, because organizations both large and small have been trying that for years, with poor results. Our industry needs to evolve its tactics to focus on doing the most important things more efficiently.

Efficiency and Integration

When you don’t have enough staff, you need to make your existing staff far more efficient. That typically involves two different tactics:

• Minimize False Positives and Negatives: The thing that burns up more time than anything else is chasing alerts into ratholes, only to find out they are false positives. So making sure alerts represent real risk is the best efficiency increase you can manage. Obviously you also want to minimize false negatives, because when you miss an attack you will spend a ton of time cleaning it up. Overall, you need to focus on minimizing errors to get better utilization out of your limited staff.
• Automate: The other aspect of increasing efficiency is automation of non-strategic functions where possible. There isn’t a lot of value in making individual IPS rule changes based on reliable threat intel or vulnerability data. Once you can trust your automation, you can have your folks do tasks that aren’t suited to automation, like triaging possible attacks.

The other way to make better


Network Security Gateway Evolution [New Series]

(Note: We’re restarting this series over the next week, so we are reposting the intro to get things moving again. – Mike)

When is a firewall not a firewall? I am not being cute – that is a serious question. The devices that masquerade as firewalls today provide much more than just access control on the edge of your network(s). Some of our influential analyst friends dubbed the category next generation firewall (NGFW), but that criminally undersells the capabilities of these devices. The “killer app” for NGFW remains enforcement of security policies by application (and even functions within applications), rather than merely by ports and protocols. This technology has matured since we last covered the enterprise firewall space in Understanding and Selecting an Enterprise Firewall. Virtually all firewall devices being deployed now (except very low-end gear) can enforce application-level policies in some way. But, as with most new technologies, having new functionality doesn’t mean the capabilities are being used well. Taking full advantage of application-aware policies requires a different way of thinking about network security, which will take time for the market to adapt to.

At the same time, many network security vendors continue to integrate their previously separate FW and IPS devices into common architectures/platforms. They have also combined in network-based malware detection and some light identity and content filtering/protection features. If this sounds like UTM, that shouldn’t be surprising – the product categories (UTM and NGFW) provide very similar functionality, just handled differently under the hood. Given this long-awaited consolidation, we see rapid evolution in the network security market. Besides additional capabilities integrated into NGFW devices, we also see larger chassis-based models, smaller branch office devices, and even virtualized and cloud-based configurations, extending these capabilities to every point in the network. Improved threat intelligence integration is also available, to block current threats.

So now is a good time to revisit our research from a couple years ago – the drivers for selection and procurement have changed since our last look at the field. And, as mentioned above, these devices are much more than firewalls. So we use the horribly pedestrian Network Security Gateway moniker to describe what network security devices will look like moving forward. We are pleased to launch the Network Security Gateway Evolution series, describing how to most effectively use these devices for the big 3 network security functions: access control (FW), threat prevention (IPS), and malware detection. Given the forward-looking nature of our research, we will also dig into a few additional use cases we are seeing – including data center segmentation, branch office protection, and protecting those pesky private/public cloud environments. As always, we develop our research using our Totally Transparent Research methodology, ensuring no hidden influence on the research.

The Path to NG

Before we jump into how the NSG is evolving, we need to pay our respects to where it has been. The initial use case for NGFW was sitting next to an older port/protocol firewall and providing visibility into which applications are being used, and by whom.
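To illustrate the difference from port/protocol rules, here is a minimal sketch of application-aware policy matching – hypothetical rule and session structures, not any vendor’s configuration language:

```python
# Hypothetical application-aware rules: matching happens on application and
# function within the application, not on port/protocol tuples.
RULES = [
    {"app": "webmail",  "function": "*",            "action": "block"},
    {"app": "dropbox",  "function": "file_upload",  "action": "block"},
    {"app": "facebook", "function": "photo_upload", "action": "block"},
    {"app": "facebook", "function": "browse",       "action": "allow"},
]

def evaluate(session):
    """Return the action for a classified session (first match wins)."""
    for rule in RULES:
        if rule["app"] == session["app"] and \
           rule["function"] in ("*", session["function"]):
            return rule["action"]
    return "allow"  # default for unmatched applications - a policy choice

print(evaluate({"app": "dropbox",  "function": "file_upload"}))  # block
print(evaluate({"app": "facebook", "function": "browse"}))       # allow
```

A port-based firewall would classify all of these sessions as TCP/443 and treat them identically; the application and function fields are what make the policies described below enforceable.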
The reports generated at the end of a product test – showing in gory detail all the nonsense employees get up to on the corporate network (much of it using corporate devices) – tend to be pretty enlightening for the network security team and executives. Once your organization saw the light of real network activity, you couldn’t unsee it. So you needed to take action, enforcing policies on those applications – leveraging capabilities such as blocking email access via a webmail interface, detecting and stopping file uploads to Dropbox, and detecting/preventing Facebook photo uploads. It all sounds a bit trivial nowadays, but a few years ago organizations had real trouble enforcing this kind of policy on web traffic. Once the devices were enforcing policy-based control over application traffic, and then matured to offer feature parity with existing devices in areas like VPN and NAT, we started to see significant migration. Some of the existing network security vendors couldn’t keep up with these NGFW competitive threats, so we have seen a dramatic shift in enterprise market share over the past few years, creating a catalyst for multi-billion-dollar M&A.

The next step has been the move from NGFW to NSG, by adding non-FW capabilities such as threat prevention. Yes, that means not only enforcement of positive policies (access control), but also detecting attacks the way a network intrusion prevention system (IPS) does. The first versions of these integrated devices could not compare to a ‘real’ (standalone) IPS, but as time marches on we expect NSGs to reach feature parity for threat prevention. Likewise, these gateways increasingly integrate detection of malware files as they enter the network, to provide additional value.

Finally, some companies couldn’t replace their existing firewalls (typically for budget or political reasons), but had more flexibility to replace their web filters. Given the ability of NSGs to enforce policies on web applications, block bad URLs, and even detect malware, standalone web filters took a hit. As with IPS, NSGs do not yet provide full feature parity with standalone web filters. But many companies don’t need the differentiating features of a dedicated web filter – making an NSG a good fit.

The Need for Speed

We have shown how NSGs have integrated, and will continue to integrate, more and more functionality. Enforcing all these policies at wire speed requires increasing compute power. And it’s not like networks are slowing down. So first-generation NGFWs reached scaling constraints pretty quickly. Vendors continue to invest in bigger iron, including more capable chassis and better distributed policy management, to satisfy scalability requirements. As networks continue to get faster, will the devices be able to keep pace, retaining all their capabilities on a single device? And do you even need to run all your stuff on the same device? Not necessarily. This raises an architectural question we will consider later in the series. Just because you can run all these capabilities on the same device doesn’t mean you should… Alternatively you can run an NSG in “firewall” mode, just enforcing basic access control policies. Or


Incite 5/20/2015: Slow down [to speed up]

When things get very busy it’s hard to stay focused. There is so much flying at you, and so many things stacking up. Sometimes you just do the easy things because they are easy. You send the email, you put together the proposal, you provide feedback on the document. It can be done in 15 minutes, so you do it. Leaving the bigger stuff for later. At least I do. Then later becomes the evening, and the big stuff is still lagging. I pop open the laptop and try to dig into the big stuff, but that’s very hard to do at the end of the day. For me, at least. In the meantime a bunch more stuff has shown up in the inbox. A couple more things need to get done. Some easy, some hard. So you run faster, get up earlier, rearrange the list, get something done. Wash, rinse, repeat. Sure, things get done. But I need to ask whether it’s the right stuff. Not always.

I know this is a solved problem. For others. They’ll tell me about their awesome Kanban workflow to control unplanned work. How they use a Pomodoro timer to make sure they give themselves enough time to get something done. Someone inevitably busts out some GTD goodness, or possibly some Seven Habits wisdom. Sigh. Here’s the thing. I have a system. It works. When I use it. The lack of a system isn’t my problem. It’s that I’m running too fast. I need to slow down. When I slow down, things come into focus. Sure, more stuff may pile up. But not all that stuff will need to get done. The emails will still be there. The proposal will get written, when I have a slot open to actually do the work.

And when I say slow down, that doesn’t mean work less. It means giving myself time to mentally explore and wander. With nowhere to be. With nothing to achieve. I do that through meditation, which I haven’t done consistently over the last few months. I prioritized my physical practices (running and yoga) at the expense of my mental practice. I figured if I just followed my breath when running, I could address both my mental and physical practices at the same time. Efficiency, right? Nope. Running and yoga are great. But I get something different from meditation.

I’m most effective when I have time to think. To explore. To indulge my need to go down paths that may not seem obvious at first. I do that when meditating. I see the thought, and sometimes I follow it down a rathole. I don’t know where it will go or what I’ll learn. I follow it anyway. Sometimes I just let the thought pass and return my awareness to the breath. But one thing is for sure – my life flows a lot easier when I’m meditating every day. Which is all that matters.

So forgive me if I don’t respond to your email within the hour. I’ll forgive myself for letting things pile up on my to-do list. The emails and tasks will be there when I’m done meditating. It turns out I can work through my lists much more efficiently once I give myself space to slow down. Strangely enough, slowing down allows me to speed up.

–Mike

Photo credit: “Slow Down” originally uploaded by Tristan Schmurr

The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. You can check it out on YouTube. Take an hour and watch it. Your emails, alerts, and Twitter timeline will be there when you get back.

Securosis Firestarter

Have you checked out our new video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.

• May 4 – RSAC wrap-up. Same as it ever was.
• March 31 – Using RSA
• March 16 – Cyber Cash Cow
• March 2 – Cyber vs. Terror (yeah, we went there)
• February 16 – Cyber!!!
• February 9 – It’s Not My Fault!
• January 26 – 2015 Trends
• January 15 – Toddler
• December 18 – Predicting the Past
• November 25 – Numbness
• October 27 – It’s All in the Cloud
• October 6 – Hulk Bash
• September 16 – Apple Pay
• August 18 – You Can’t Handle the Gartner

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.

• Network-based Threat Detection: Operationalizing Detection; Prioritizing with Context; Looking for Indicators; Overcoming the Limits of Prevention
• Applied Threat Intelligence: Building a TI Program; Use Case #3, Preventative Controls; Use Case #2, Incident Response/Management; Use Case #1, Security Monitoring; Defining TI
• Network Security Gateway Evolution: Introduction

Recently Published Papers

• Endpoint Defense: Essential Practices
• Cracking the Confusion: Encryption & Tokenization for Data Centers, Servers & Applications
• Security and Privacy on the Encrypted Network
• Monitoring the Hybrid Cloud
• Best Practices for AWS Security
• Securing Enterprise Applications
• Secure Agile Development
• Trends in Data Centric Security
• Leveraging Threat Intelligence in Incident Response/Management
• The Future of Security

Incite 4 U

Don’t believe everything you read: The good news about Securosis’ business is that we don’t have to chase news. Sure, if there is something timely and we have room on our calendar, we’ll comment on current events. But if you look at our blog lately, it’s clear we’re pretty busy. So we didn’t get around to commenting on this plane hacking stuff. But if we wait around long enough, one of our friends will say pretty much what I’m thinking. So thanks to Wendy, who summed up the situation nicely. And that reminds me of something I have to tell my kids almost every day: don’t believe everything you read on the Internet. You aren’t getting the full story. Media outlets,


Network-based Threat Detection: Operationalizing Detection

As we wrap up our Network-based Threat Detection series, we have already covered why prevention isn’t good enough, and how to find indications that an attack is happening based on what you see on the network. Our last post worked through adding context to collected data, to enable some measure of prioritization for alerts. To finish things off, we will discuss additional context and making alerts operationally useful.

Leveraging Threat Intelligence for Detection

So far this analysis has been restricted to your organization: you gather data from your networks and add context from your enterprise systems. Which is great, but not enough. Factoring data from other organizations into your analysis can help you refine it and prioritize your activities more effectively. Yes, we are talking about using threat intelligence in your detection process.

For prevention, threat intel can be useful for deciding which external sites should be blocked on your egress filters, based on reputation and possibly adversary analysis. This approach helps ensure devices on your network don’t communicate with known malware sites, bot networks, phishing sites, watering hole servers, or other places on the Internet you want nothing to do with. Recent conversations with practitioners indicate much greater willingness to block traffic – so long as they have confidence in the alerts.

But this series isn’t called Network-based Threat Prevention, so how does threat intelligence help with detection? TI provides a view of the network traffic patterns used in attacks on other organizations. Learning about these patterns enables you to look for them (domain generation algorithms, for example) within your own environment. You might also see indicators of the internal reconnaissance or lateral movement typically used by certain adversaries, and use them to identify attacks in process. Watching for bulk file transfers, for example, or types of file encryption known to be used by particular crime networks, could yield insight into exfiltration activities.

Just as the burden of proof is far lower in civil litigation than in criminal litigation, the bar for useful accuracy is far lower in detection than in prevention. When you are blocking network traffic for prevention, you had better be right. Users get cranky when you block legitimate network sessions, so you will be conservative about what you block. That means you will inevitably miss something – the dreaded false negative, a legitimate attack. But firing an alert provides more leeway, so you can be a bit less rigid. That said, you still want to be close – false positives are still very expensive. This is where the approach mapped out in our last post comes into play: if you see something that looks like an attack based on external threat intel, you apply the same contextual filters to validate and prioritize.

Retrospection

What happens when you don’t know an attack is actually an attack when the traffic enters your network? This happens every time a truly new attack vector emerges. Obviously you don’t know about it, so your network controls will miss it and your security monitors won’t know what to look for. No one has seen it yet, so it doesn’t show up in threat intel feeds. So you miss it, but that’s life. Everyone misses new attacks. The question is: how long do you miss it? One of the most powerful concepts in threat intelligence is the ability to take newly discovered indicators and retrospectively look through security data to see whether an attack has already hit you.
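Here is a minimal sketch of that retrospective sweep – the in-memory telemetry store and field names are hypothetical stand-ins for whatever your analytics engine actually queries:

```python
from datetime import datetime, timedelta

now = datetime.now()

# Hypothetical telemetry store - in practice this would be a query against
# your SIEM or network metadata repository, not an in-memory list.
TELEMETRY = [
    {"ts": now - timedelta(days=30), "host": "hr-laptop-7", "dest": "198.51.100.9"},
    {"ts": now - timedelta(days=2),  "host": "web01",       "dest": "192.0.2.14"},
]

def retrospect(new_indicators, telemetry, lookback=timedelta(days=90)):
    """Sweep stored telemetry for devices that already talked to a
    destination that fresh threat intel now flags as malicious."""
    cutoff = datetime.now() - lookback
    return [t for t in telemetry
            if t["dest"] in new_indicators and t["ts"] >= cutoff]

# A feed update lands with a C&C address we had never seen before.
for hit in retrospect({"198.51.100.9"}, TELEMETRY):
    print(f"{hit['host']} contacted {hit['dest']} on {hit['ts']:%Y-%m-%d} - investigate")
```

The lookback window is a policy decision: the longer you retain telemetry, the further back a new indicator can take you.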
When you get a new threat intel indicator, you can search your network telemetry (using your fancy analytics engine) to see whether you have encountered it before. This isn’t optimal, because you already missed the attack. But it’s much better than waiting for the attacker to take the next step in the attack chain. In the security game nothing is perfect, but leveraging the hard-won experience of other organizations makes your own detection faster and more accurate.

A Picture Is Worth a Thousand Words

At this point you have alerts, and perhaps some measure of prioritization for them. But one of the most difficult tasks is navigating through the hundreds or thousands of alerts that occur in networks at scale. That’s where visualization techniques come into play. A key criterion for choosing a detection offering is getting information presented in a way that makes sense to you and will work in your organization’s culture. Some like the traditional user experience, which looks like a Top 10 list of potentially compromised devices, with a grid showing the details of each alert. Another way to visualize detection data is as a heat map, showing devices and potential risks visually, with drill-down into indicators and alert causes. There is no right or wrong here – it is just a question of what will be most effective for your security operations team.

Operationalizing Detection

As compelling as network-based threat detection is conceptually, a bunch of integration needs to happen before it can provide value and increase your security program’s effectiveness. There are two sides to integration: the data you need for detection, and the information about alerts sent to other operational systems. For the former, connections to identity systems and external threat intelligence drive the analytics for detection. The latter includes the ability to pump alert and contextual data to your SIEM or other alerting system, to kick off your investigation process. If you get comfortable enough with your detection results, you can even configure automatic responses, such as IPS blocking rules based on these alerts. You might prevent compromised devices from doing anything, block C&C traffic, or block exfiltration traffic. As described above, prevention demands minimization of false positives, but disrupting attackers can be extremely valuable. Similarly, integration with Network Access Control can move a compromised device onto a quarantine network until it can be investigated and remediated.

For network forensics you might integrate with a full packet capture/network forensics platform. In this use case, when a device shows potential compromise, traffic to and from it can be captured for forensic analysis. Such captured network traffic may provide the proverbial smoking gun. This approach could also make you popular


Network-based Threat Detection: Prioritizing with Context

During speaking gigs we ask how many people in the audience actually get through their to-do list every day. Usually we get one or two jokers in the crowd between jobs, or maybe just trying to troll us a bit. But nobody in a security operational role gets everything done every day. So the critical success factor is making sure you get the right things done, rather than burning time on activities that don’t reduce risk or contain attack damage.

Underpinning this series is the fact that prevention inevitably fails at some point. Along with a renewed focus on network-based detection, that means your monitoring systems will detect a bunch of things. But which alerts are important? Which represent active adversary activity? Which are just noise to be ignored? Figuring out which is which is where you need the most help. To use a physical security analogy, a perimeter fence sensor will alert regularly. But you need to figure out whether it’s a confused squirrel, a wayward bird, a kid on a dare, or the offensive maneuver of an adversary. Just looking at the alert won’t tell you much. But if you add other details and additional context into your analysis, you can figure out which is which. The stakes for getting this right are pretty high: the postmortems of many recent high-profile breaches indicate alerts did fire – in some cases multiple times from multiple systems – but the organizations failed to take action… and suffered the consequences.

Our last post listed network telemetry you could look for to indicate potential malicious activity. Let’s say you like the approach laid out in that post and decide to implement it in your own monitoring systems. You flip the switch, and the alerts come streaming in. Now comes the art: separating signal from noise, and narrowing your focus to the alerts that matter and demand immediate attention. You do this by adding context to general network telemetry, and then using an analytics engine to crunch the numbers. To add context you can leverage both internal and external information. At this point we’ll focus on internal data, because you already have it and can put it to use right away. Our next post will tackle external data, typically accessible via a threat intelligence feed.

Device Behavior

You start by figuring out what’s important – not all devices are created equal. Some store very important data. Some are issued to employees with access to important data, typically executives. But not all devices present a direct risk to your organization, so categorizing them provides the first filter for prioritization. You can use the following hierarchy to kickstart your efforts:

• Critical devices: Devices with access to protected information and/or particularly valuable intellectual property should bubble to the top. Fast. If a device on a protected and segmented network shows indications of compromise, that’s bad and needs to be dealt with immediately. Even if the device is dormant, traffic on a protected network that looks like command and control constitutes smoke, and you need to act quickly to ensure any fire doesn’t spread. Or enjoy your disclosure activities…
• Active malicious devices: If you see device behavior which indicates an active attack (perhaps reconnaissance, lateral movement within the environment, blasting bits at internal resources, or exfiltrating data), that’s your next order of business. Even if the device isn’t considered critical, if you don’t deal with it promptly the attacker might find an exploitable hole to a higher-value device and move laterally within the organization. So investigate and remediate these devices next.
• Dormant devices: These devices at some point showed behavior consistent with command and control traffic (typically staying in communication with a C&C network), but aren’t doing anything malicious at the moment. Given the number of other fires raging in your environment, you may not have time to remediate dormant devices immediately.

These priorities are fairly coarse, but should be sufficient. You don’t want a complicated multi-tier rating system that is too involved to use on a daily basis. Priorities should be clear: a critical device showing malicious activity is a red alert. Critical devices that throw alerts get investigated next, and then non-critical devices showing malicious activity. Finally, after you have all the other stuff done, you can get around to dealing with devices you’re pretty sure are compromised. Of course this last bucket might show malicious activity at any time, so you still need to watch it – the question is when you remediate. This categorization helps, but within each bucket you likely have multiple devices, so you still need additional information and context to make decisions.

Who and Where

Not all employees are created equal either. Another source of context is user identity, and there are a bunch of groups you need to pay attention to. The first is people with elevated privileges, such as administrators and others with entitlements to manage devices that hold critical information. They can add, delete, and change accounts and access rules on the servers, and manipulate data. They have access to tamper with logs, and can basically wreck an environment from the inside. There are plenty of examples of rogue or disgruntled administrators making a real mess, so when you see an administrator’s device behaving strangely, it should bubble up to the top of your list.

The next group of folks to watch closely are executives with access to financials, company strategy, and other key intellectual property. These users are attacked most frequently via phishing and other social engineering, so they need to be watched closely – even trained, they aren’t perfect. This may trigger organizational backlash – some executives get cranky when they are monitored. But that’s not your problem, and without this kind of context it’s hard to do your job. So dig in and make your case to the executives for why it’s important. As you look for indicators that devices are connecting to a C&C server or performing reconnaissance, you are protecting the organization, and


Incite 5/6/2015: Just Be

I’m spent after the RSAC. By Friday I have been on for close to a week. It’s nonstop, from the break of dawn until the wee hours of the morning. But don’t feel too bad – it’s one of my favorite weeks of the year. I get to see my friends. I do a bunch of business. And I get a feel for how close our research is to reflecting the larger trends in the industry. But it’s exhausting. When the kids were smaller I would fly back early Friday morning and jump back into the fray of the Daddy thing. I had very little downtime and virtually no opportunity to recover. Shockingly enough, I got sick or cranky or both. So this year I decided to do it differently. I stayed in SF through the weekend to unplug a bit.   I made no plans. I was just going to flow. There was a little bit of structure. Maybe I would meet up with a friend and get out of town to see some trees (yes, Muir Woods was on the agenda). I wanted to catch up with a college buddy who isn’t in the security business, at some point. Beyond that, I’d do what I felt like doing, when I felt like doing it. I wasn’t going to work (much) and I wasn’t going to talk to people. I was just going to be. Turns out my friend wasn’t feeling great, so I was solo on Friday after the closing keynote. I jumped in a Zipcar and drove down to Pacifica. Muir Woods would take too long to reach, and I wanted to be by the water. Twenty minutes later I was sitting by the ocean. Listening to the waves. The water calms me and I needed that. Then I headed back to the city and saw an awesome comedian was playing at the Punchline. Yup, that’s what I did. He was funny as hell, and I sat in the back with my beer and laughed. I needed that too. Then on Saturday I did a long run on the Embarcadero. Turns out a cool farmer’s market is there Saturdays. So I got some fruit to recover from the run, went back to the hotel to clean up, and then headed back to the market. I sat in a cafe and watched people. I read a bit. I wrote some poetry. I did a ZenTangle. I didn’t speak to anyone (besides a quick check-in with the family) for 36 hours after RSA ended. It was glorious. Not that I don’t like connecting with folks. But I needed a break. Then I had an awesome dinner with my buddy and his wife, and flew back home the next day in good spirits, ready to jump back in. I’m always running from place to place. Always with another meeting to get to, another thing to write, or another call to make. I rarely just leave myself empty space with no plans to fill it. It was awesome. It was liberating. And I need to do it more often. This is one of the poems I wrote, watching people rushing around the city. Rush You feel them before you see They have somewhere to be It’s very important Going around you as quickly as they can. They are going places. Then another And another And another Constantly rushing But never catching up. They are going places. Until they see that right here is the only place they need to be. – MSR, 2015 –Mike Photo credit: “65/365: be. [explored]“_ originally uploaded by It’s Holly The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. You can check it out on YouTube. Take an hour and check it out. Your emails, alerts and Twitter timeline will be there when you get back. Securosis Firestarter Have you checked out our new video podcast? Rich, Adrian, and Mike get into a Google Hangout and.. hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail. 
  • March 31 – Using RSA
  • March 16 – Cyber Cash Cow
  • March 2 – Cyber vs. Terror (yeah, we went there)
  • February 16 – Cyber!!!
  • February 9 – It’s Not My Fault!
  • January 26 – 2015 Trends
  • January 15 – Toddler
  • December 18 – Predicting the Past
  • November 25 – Numbness
  • October 27 – It’s All in the Cloud
  • October 6 – Hulk Bash
  • September 16 – Apple Pay
  • August 18 – You Can’t Handle the Gartner

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.

Network-based Threat Detection
  • Looking for Indicators
  • Overcoming the Limits of Prevention

Applied Threat Intelligence
  • Building a TI Program
  • Use Case #3, Preventative Controls
  • Use Case #2, Incident Response/Management
  • Use Case #1, Security Monitoring
  • Defining TI

Network Security Gateway Evolution
  • Introduction

Recently Published Papers

  • Endpoint Defense: Essential Practices
  • Cracking the Confusion: Encryption & Tokenization for Data Centers, Servers & Applications
  • Security and Privacy on the Encrypted Network
  • Monitoring the Hybrid Cloud
  • Best Practices for AWS Security
  • Securing Enterprise Applications
  • Secure Agile Development
  • Trends in Data Centric Security
  • Leveraging Threat Intelligence in Incident Response/Management
  • The Future of Security

Incite 4 U

Threat intel still smells like poop? I like colorful analogies. I’m sad that my RSAC schedule doesn’t allow me to see some of the more interesting sessions by my smart friends. But this blow-by-blow of Rick Holland’s Threat Intelligence is Like Three-Day Potty Training makes me feel like I was there. I like the maturity model, and know many large organizations invest a boatload of cash in threat intel, and as long as they take a process-centric view (as Rick advises) they can get great value from that investment. But I’m fixated on the not-Fortune-500. You know, organizations with a couple folks on the security team…

Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

It goes beyond Open Source Research, and is a far cry from the traditional syndicated research model – we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
  • Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments and input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
  • Although quotes from published primary research (and published primary research only) may be used in press releases, such quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, at the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be unbiased and valuable to the user community, to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.