Securosis Research

Shining a Light on Shadow Devices: Attacks

What is the real risk of the Shadow Devices we described back in our first post? It is clear that most organizations don’t really take these risks seriously. They certainly don’t have safeguards in place, or proactively segment their environments, to ensure that compromising these devices doesn’t give attackers a foothold in their environments. Let’s dig into three broad device categories to understand what attacks look like.

Peripherals

Do you remember how cool it was when the office printer got a WiFi connection? Suddenly you could put it wherever you wanted, preserving the Feng Shui of your office, instead of having it tethered to the network drop. And when the printer makers started calling their products image servers, not just printers? Yeah, that was when they started becoming more intelligent, and also tempting targets.

But what is the risk of taking over a printer? It turns out that even in our paperless offices of the future, organizations still print out some pretty sensitive stuff, and stuff they don’t want to keep may be scanned for storage/archival. Whether going in or out, sensitive content is hitting imaging servers. Many of them store the documents they print and scan until their memory (or embedded hard drive) is written over. So sensitive documents persist on devices, accessible to anyone with access to the device, either physical or remote.

Even better, many printers are vulnerable to common wireless attacks like the evil twin, where a fake device with a stronger wireless signal impersonates the real printer. So devices connect to the evil twin and send it documents instead of the real printer – the same attack works with routers too, where the risk is much broader. Nice. But that’s not all! The devices typically use some kind of stripped-down UNIX variant at the core, and many organizations don’t change the default passwords on their image servers, enabling attackers to trigger remote firmware updates and install compromised versions of the printer OS. Another attack vector is that these imaging devices now connect to cloud-based services to email documents, so they have all the plumbing to act as a spam relay. Most printers use similar open source technologies to provide connectivity, so generic attacks generally work against a variety of manufacturers’ devices. These peripherals can be used to steal content, attack other devices, and provide a foothold inside your network perimeter. That makes them both direct and indirect targets.

These attacks aren’t just theoretical. We have seen printers hijacked to spread inflammatory propaganda on college campuses, and Chris Vickery showed proof of concept code to access a printer’s hard drive remotely.

Our last question is what kind of security controls run on imaging servers. The answer is: not much. To be fair, vendors have started looking at this more seriously, and were reasonably responsive in patching the attacks mentioned above. But these products do not get the same scrutiny as PCs, or even some of the other connected devices we will discuss below. Imaging servers see relatively minimal security assessment before coming to market.

We aren’t just picking on printers here. Pretty much every intelligent peripheral is similarly vulnerable, because they all have operating systems and network stacks which can be attacked. It’s just that offices tend to have dozens of printers, which are frequently overlooked during risk assessment.
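Because imaging devices are so frequently missed during discovery, even a crude sweep of the ports printers typically expose can surface devices nobody is tracking. Here is a minimal sketch in Python; the subnet is hypothetical, and a real inventory effort would lean on proper discovery tooling rather than a script like this:

```python
import socket
from ipaddress import ip_network

# Ports commonly exposed by networked printers and imaging servers:
# 9100 = raw "JetDirect" printing, 631 = IPP, 515 = LPD
PRINTER_PORTS = (9100, 631, 515)
SUBNET = "192.168.10.0/24"  # hypothetical office subnet

def port_open(host: str, port: int, timeout: float = 0.3) -> bool:
    """Return True if the host accepts a TCP connection on the port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def find_imaging_devices(subnet: str):
    """Yield (host, open_ports) for hosts exposing typical printer ports."""
    for host in ip_network(subnet).hosts():
        open_ports = [p for p in PRINTER_PORTS if port_open(str(host), p)]
        if open_ports:
            yield str(host), open_ports

if __name__ == "__main__":
    for host, ports in find_imaging_devices(SUBNET):
        print(f"Possible imaging device at {host}: open ports {ports}")
```

Anything this turns up that isn’t in your asset inventory is a candidate for the default-credential and firmware checks discussed above.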
Medical Devices

If printers and other peripherals seem like low-value targets, let’s discuss something a bit higher-value: medical devices. In our era of increasingly connected medical devices – including monitors, pumps, pacemakers, and pretty much everything else – there hasn’t been much focus on product security, except in the few cases where external pressure is applied by regulators. These devices either have IP network stacks or can be configured via Bluetooth – neither of which is particularly well protected.

The most disturbing attacks threaten patient health. There are all too many examples of security researchers compromising infusion and insulin pumps, jackpotting drug dispensaries, and even the legendary Barnaby Jack messing with a pacemaker. We know one large medical facility that took it upon itself to hack all its devices in use, and deliver a list of issues to the manufacturers. But there has been no public disclosure of results, or whether device manufacturers have made changes to make their devices safe.

Despite the very real risk of medical devices being targeted to attack patient health, we believe most of the current risk involves information. User data is much easier for attackers to monetize; medical profiles have a much longer shelf-life and much higher value than typical financial information. So ensuring that Protected Health Information is adequately protected remains a key concern in healthcare. That means making sure there aren’t any leakages in these devices, which is not easy without a full penetration test.

On the positive front, many of these devices have purpose-built operating systems, so they cannot really be used as pivot points for lateral movement within the network. Yet few have any embedded security controls to ensure data does not leak. Further complicating matters, some devices still use deprecated operating systems such as Windows XP and even Windows 2000 (yes, seriously), and outdated compliance mandates often mean they cannot be patched without recertification. So administrators often don’t update the devices, and hope for the best. We can all agree that hope isn’t a sufficient strategy.

With lives at stake, medical device makers are starting to talk about more proactive security testing. Similarly to the way a major SaaS breach could prove an existential threat to the SaaS market, medical device makers should understand what is at risk, especially in terms of liability, but that doesn’t mean they understand how to solve the problem. So the burden lands on customers to manage their medical device inventories, and ensure they are not misused to steal data or harm patients.

Industrial Control Systems

The last category of shadow devices we will consider is control systems. These devices range from SCADA systems running power grids, to warehousing systems ensuring the right merchandise is

Read Post

Incite 4/27/2016: Tap the B.R.A.K.E.S.

I mentioned back in January that XX1 had gotten her driver’s permit and was in command of a two-ton weapon on a regular basis. Driving with her has been, uh, interesting. I try to give her an opportunity to drive where possible, like when I have to get her to school in the morning. She can navigate the couple of miles through traffic on the way to her school. And she drives to/from her tutor as well, but that’s still largely local travel. Though I do have to say, I don’t feel like I need to run as frequently, because the 15-20 minutes in the car with her gets my heart racing for the entire trip.

Obviously, having been driving for over 30 years, I see things as they develop in front of me. She doesn’t. So I have to squelch the urge to say, “Watch that dude over there, he’s about to change lanes.” Or “That’s a red light and that means stop, right?” Or “Hit the f***ing brakes before you hit that car, building, child, etc.” She only leveled a garbage bin once. Which caused more damage to her ego and confidence than it did to the car or the bin. So overall, it’s going well. But I’m not taking chances, and I want her to really understand how to drive. So I signed her up for the B.R.A.K.E.S. teen defensive driver training. Due to some scheduling complexity, taking the class in New Jersey worked better. So we flew up last weekend and stayed with my Dad on the Jersey Shore.

First, a little digression. When you have 3 kids with crazy schedules, you don’t get a lot of individual time with any of them. So it was great to spend the weekend with her, and I definitely got a much greater appreciation for the person she is in this moment. As we were sitting on the plane, I glanced over and she seemed so big. So grown up. I got a little choked up as I had to acknowledge how quickly time is passing. I remember bringing her home from the hospital like it was yesterday. Then we were at a family event on Saturday night with some cousins by marriage that she doesn’t know very well. To see her interact with these folks, hold a conversation, and be funny and engaging and cute… I was overwhelmed with pride watching her bring light to the situation.

But then it was back to business. First thing Sunday morning we went over to the race track. They did the obligatory video to scare the crap out of the kids. The story of B.R.A.K.E.S. is a heartbreaking one. Doug Herbert, who is a professional drag racer, started the program after losing his two sons in a teen driving accident. So he travels around the country with a band of other professional drivers teaching teens how to handle a vehicle. The statistics are shocking. Upwards of 80% of teens will get into an accident in their first 3 years of driving. There are 5,000 teen driving fatalities each year. And these kids get very little training before they are put behind the wheel to figure it out.

The drills for the kids are very cool. They practice accident avoidance and steering while panic braking. They do a skid exercise to understand how to keep the car under control during a spin. They do slalom work to make sure they understand how far they can push the car and still maintain control. The parents even got to do some of the drills (which was very cool). They also do a distracted driving drill, where the instructor messes with the kids to show them how dangerous it is to text and play with the radio while driving. And they have these very cool drunk goggles, which simulate your vision when under the influence.
Hard to see how any of the kids would get behind the wheel drunk after trying to drive with those goggles on. I can’t speak highly enough about the program. I let XX1 drive back from the airport and she navigated downtown Atlanta, a high traffic situation on a 7 lane highway, and was able to avoid an accident when a knucklehead slowed down to 30 on the highway trying to switch lanes to make an exit. Her comfort behind the wheel was totally different and her skills were clearly advanced in just the four hours. If you have an opportunity to attend with your teen, don’t think about it. Just do it. Here is the schedule of upcoming trainings, and you should sign up for their mailing list. The training works. They have run 18,000 teens through the program and not one of them has had a fatal accident. That’s incredible. And important. Especially given my teen will be driving without me (or her Mom) in the car in 6 months. I want to tip the odds in my favor as much as I can. –Mike Security is changing. So is Securosis. Check out Rich’s post on how we are evolving our business. We’ve published this year’s Securosis Guide to the RSA Conference. It’s our take on the key themes of this year’s conference (which is really a proxy for the industry), as well as deep dives on cloud security, threat protection, and data security. And there is a ton of meme goodness… Check out the blog post or download the guide directly (PDF). The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. You can check it out on YouTube. Take an hour. Your emails, alerts, and Twitter timeline will be there when you get back. Securosis Firestarter Have you checked out our video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail. Mar 16 – The Rugged vs. SecDevOps Smackdown Feb 17 – RSA Conference – The Good, Bad and Ugly Dec

Read Post

Building a Vendor IT Risk Management Program: Ongoing Monitoring and Communication

As we mentioned in the last post, after you figure out what risk means to your organization, and determine the best way to quantify and rank your vendors in terms of that concept of risk, you’ll need to revisit your risk assessments, because security in general, and each vendor’s environment specifically, is dynamic and constantly changing. We also need to address how to deal with vendor issues (breaches and otherwise) – both within your organization, and potentially with customers as well.

Ongoing Monitoring

When keeping tabs on your vendors you need to decide how often to update your assessments of their security posture. In a perfect world you’d like a continuous view of each vendor’s environment, to help you understand your risk at all times. Of course continuous monitoring comes at a cost. So part of defining a V(IT)RM program is figuring out the frequency of assessment. We believe vendors should not all be treated alike. The vendors in your critical risk tier (described in our last post) should be assessed as often as possible. Hopefully you’ll have a way (most likely through third-party services) of continually monitoring their Internet footprint, and alerting you when something changes adversely.

We need to caveat that with a warning about real-time alerts. If you are not staffed to deal with real-time alerts, then getting them faster doesn’t help. In other words, if it takes you 3 days to work through your alert queue, getting an alert within an hour cannot reduce your risk much. Vendors in less risky tiers can be assessed less frequently. An annual self-assessment and a quarterly scan might be enough for them. Again, this depends on your ability to deal with issues and verify answers. If you aren’t going to look at the results, forcing a vendor to update their self-assessment quarterly is just mean, so be honest with yourself when determining the frequency of assessments.

With assessment frequency determined by risk tier, what next? You’ll find adverse changes to the security posture of some vendors. The next step in the V(IT)RM program is to figure out how to deal with these issues.

Taking Action

You got an alert that there is an issue with a vendor, and you need to take action. But what actions can you take, considering the risk posed by the issue and the contractual agreement already in place? We cannot overstate the importance of defining acceptable actions contractually as part of your vendor onboarding process. A critical aspect of setting up and starting your program is ensuring your contracts with vendors support your desired actions when an issue arises. So what can you do? This list is pretty consistent with most other security processes:

Alert: At minimum you’ll want a line of communication open with the vendor to tell them you found an issue. This is no different than an escalation during incident response. You’ll need to assemble the information you found, and package it up to give the vendor as much information as practical. But you need to balance how much time you’re willing to spend helping the vendor against everything else on your to-do list.

Quarantine: As an interim measure, until you can figure out what happened and your best course of action, you could quarantine the vendor. That could mean a lot of things. You might segment their traffic from the rest of your network. Or scrutinize each transaction coming from them. Or analyze all egress traffic to ensure no intellectual property is leaking.
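What quarantine looks like in practice depends on how the vendor connects to you. As a rough sketch, assuming a hypothetical vendor that reaches your environment over a dedicated VPN subnet and a Linux gateway running iptables at that boundary, you might stage rules that confine the subnet to a single inspection point until you decide on next steps:

```python
# Hypothetical example: confine a vendor's VPN subnet to one inspection proxy.
# Rules are only printed for review; apply them (e.g., via subprocess) once the
# escalation process described here has signed off.

VENDOR_SUBNET = "203.0.113.0/24"   # hypothetical vendor VPN range
INSPECTION_PROXY = "10.20.0.5"     # hypothetical egress proxy / monitoring point

def quarantine_rules(subnet: str, proxy: str) -> list[str]:
    return [
        # Allow vendor traffic only to and from the inspection proxy...
        f"iptables -I FORWARD 1 -s {subnet} -d {proxy} -j ACCEPT",
        f"iptables -I FORWARD 2 -s {proxy} -d {subnet} -j ACCEPT",
        # ...then log and drop everything else involving that subnet.
        f"iptables -I FORWARD 3 -s {subnet} -j LOG --log-prefix 'VENDOR-QUARANTINE '",
        f"iptables -I FORWARD 4 -s {subnet} -j DROP",
        f"iptables -I FORWARD 5 -d {subnet} -j DROP",
    ]

if __name__ == "__main__":
    for rule in quarantine_rules(VENDOR_SUBNET, INSPECTION_PROXY):
        print(rule)
```

The same idea maps to a cloud environment by tightening the security group or network ACL attached to the vendor connection.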
The point is that you’ll need time to figure out the best course of action, and putting the vendor in a proverbial penalty box can buy you that time. This is also contingent on being able to put a boundary around a specific vendor or service provider, which may not be possible, depending on what services they provide.

Cut off: There is also the kill switch, which removes the vendor’s access to your systems and likely ends the business relationship. This is a draconian action, but sometimes a vendor presents so much risk, and/or won’t make the changes you require, that you may not have a choice. As mentioned above, you’ll need to make sure your contract supports this action. Unless you enjoy protracted litigation.

The latter two options impact the flow of business between your organization and the vendor, so you’ll need an internal process to determine if and when you quarantine and/or cut off a vendor. This escalation and action plan, including the rules of engagement and the criteria to suspend or end a business relationship due to IT risk, needs to be established ahead of time. Defined escalations ensure the internal stakeholders are in the loop as you consider flipping the kill switch. A good rule of thumb is that you don’t want to surprise anyone when a vendor goes into quarantine or is cut off from your systems. If the business decision is made to keep the vendor active in your systems (a decision made well above your pay grade), at least you’ll have documentation that the risk was accepted by the business owner.

Communicating Issues

Once the action plan is defined, documented, and agreed upon, you’ll want to build a communication plan. That includes defining when you’ll notify the vendor and when you’ll communicate the issue internally. As part of the vendor onboarding process you need to define points of contact with the vendor. Do they have a security team you should interface with? Is it their business operations group? You need to know before you run into an issue. You’ll also want to have an internal discussion about how much you will support the vendor as they work through any issues you find. If the vendor has an immature security team and/or program, you can easily end up doing a lot of work for them. And it’s not like you have a bunch of time to do someone else’s work, right? Of course business owners may be unsympathetic to your plight when their key vendor is cut off. That’s

Read Post

Building a Vendor IT Risk Management Program: Evaluating Vendor Risk

As we discussed in the first post in this series, whether it’s thanks to increasingly tight business processes/operations with vendors and trading partners, or to regulation (especially in finance), you can no longer ignore vendor risk management. So we delved into the structure and mapped out a few key aspects of a VRM program. Of course we are focused on the IT aspects of vendor management, which should be a significant component of a broader risk management approach for your environment. But that raises the question of how you can actually evaluate the risks of a vendor. What should you be worried about, and how can you gather enough information to make an objective judgment of the risk posed by every vendor? That’s what we’ll explore in this post.

Risk in the Eye of the Beholder

The first aspect of evaluating vendor risk is actually defining what that risk means to your organization. Yes, that seems self-evident, but you’d be surprised how many organizations don’t document or get agreement on what presents vendor risk, and then wonder why their risk management programs never get anywhere. Sigh. All the same, as mentioned above, vendor (IT) risk is a component of a larger enterprise risk management program. So first establish the risks of working with vendors. Those risks can be broken up into a variety of buckets, including:

Financial: This is about the viability of your vendors. Obviously this isn’t something you can control from an IT perspective, but if a key vendor goes belly up, that’s a bad day for your organization. So this needs to be factored in at the enterprise level, as well as considered from an IT perspective – especially as cloud services and SaaS proliferate. If your Database as a Service vendor (or any key service provider) goes away, for whatever reason, that presents risk to your organization.

Operational: You contract with vendors to do something for your organization. What is the risk if they cannot meet those commitments? Or if they violate service level agreements? Again, this is enterprise-level risk, but it also reaches down into the IT world. Do you pack up shop and go somewhere else if your vendor’s service is down for a day? Are your applications and/or infrastructure portable enough to even do that?

Security: As security professionals this is our happy place. Or unhappy place, depending on how you feel about the challenges of securing much of anything nowadays. This gets to the risk of a vendor being hacked and losing your key data, impacting the availability of your services, and/or allowing an adversary to jump into your networks and systems.

Within those buckets, there are probably a hundred different aspects that present risk to your organization. After defining those buckets of risk, you need to dig into the next level and figure out not just what presents risk, but also how to evaluate and quantify that risk. What data do you need to evaluate the financial viability of a vendor? How can you assess the operational competency of vendors? And finally, what can you do to stay on top of the security risk presented by vendors? We aren’t going to tackle financial or operational risk categories, but we’ll dig into the IT security aspects below.

Ask Them

The first hoop most vendors have to jump through is self-assessment. As a vendor to a number of larger organizations, we are very familiar with the huge Excel spreadsheet or web app built to assess our security controls.
Most of the questions revolve around your organization’s policies, controls, response, and remediation capabilities. The path of least resistance for this self-assessment is usually a list of standard controls. Many organizations start with ISO 27002, COBIT, and PCI-DSS. Understand that relevance is key here. For example, if a vendor is only providing your organization with nuts and bolts, their email doesn’t present a very significant risk. So you likely want a separate self-assessment tool for each risk category, as we’ll discuss below.

It’s pretty easy to lie on a spreadsheet or web application. And vendors do exactly that. But you don’t have the resources to check everything, so there is a measure of trust but verify that you need to apply here. Just remember that it’s resource-intensive to evaluate every answer, so focus on what’s important, based on the risk definitions above.

External Information

Just a few years ago, if you wanted to assess the security risk of a vendor, you needed to either have an on-site visit or pay for a penetration test to really see what an attacker could do to your partners. That required a lot of negotiation and coordination with the vendor, which meant it could only be used for your most critical vendors. And half the time they’d tell you to go pound sand, pointing to the extensive self-assessment you forced them to fill out. But now, with the introduction of external threat intelligence services, and techniques you can implement yourself, you can get a sense of what kind of security mess your vendors actually are. Here are a few types of relevant data sources:

Botnets: Botnets are public by definition, because they use compromised devices to communicate with each other. So if a botnet is penetrated you can see who is connecting to it at the time, and get a pretty good idea of which organizations have compromised devices. That’s exactly how a number of services can tell you that certain networks are compromised without ever looking at the networks in question.

Spam: If you have a network that is blasting out a bunch of spam, that indicates an issue. It’s straightforward to set up a number of dummy email accounts to gather spam, and see which networks are used to blast millions of messages a day. If a vendor owns one of those networks, that’s a disheartening indication of their security prowess.

Stolen credentials: There are a bunch of forums where stolen credentials are traded, and if a specific vendor shows up with tons of their accounts and passwords for sale, that means their

Read Post

Incite 4/6/2016—Hindsight

When things don’t go quite as you hoped, it’s human nature to look backwards and question your decisions. If you had done something different maybe the outcome would be better. If you didn’t do the other thing, maybe you’d be in a different spot. We all do it. Some more than others. It’s almost impossible to not wonder what would have been. But you have to be careful playing Monday Morning QB. If you wallow in a situation you end up stuck in a house of pain after a decision doesn’t go well. You probably don’t have a time machine, so whatever happened is already done. All you have left is a learning opportunity to avoid making the same mistakes again. That is a key concept, and I work to learn from every situation. I want to have an idea of what I would do if I found myself in a similar situation again down the line. Sometimes this post-mortem is painful – especially when the decision you made or action you took was idiotic in hindsight. And I’ve certainly done my share of idiotic things through the years. The key to leveraging hindsight is not to get caught up in it. Learn from the situation and move on. Try not to beat yourself up over and over again about what happened. This is easy to say and very hard to do. So here is how I make sure I don’t get stuck after something doesn’t exactly meet my expectations. Be Objective: You may be responsible for what happened. If you are, own it. Don’t point fingers. Understand exactly what happened and what your actions did to contribute to the eventual outcome. Also understand that some things were going to end badly regardless of what you did, so accept that as well. Speculate on what could be different: Next take some time to think about how different actions could have produced different outcomes. You can’t be absolutely sure that a different action would work out better, but you can certainly come up with a couple scenarios and determine what you want to do if you are in that situation again. It’s like a game where you can choose different paths. Understand you’ll be wrong: Understand that even if you evaluate 10 different options for a scenario, next time around there will be something you can’t anticipate. Understand that you are dealing with speculation, and that’s always dicey. Don’t judge yourself: At this point you have done what you can do. You owned your part in however the situation ended up. You figured out what you’ll do differently next time. It’s over, so let it go and move forward. You learned what you needed, and that’s all you can ask for. That’s really the point. Fixating on what’s already happened closes off future potential. If you are always looking behind you, you can neither appreciate nor take advantage of what’s ahead. This was a hard lesson for me. I did the same stuff for years, and was confused because nothing changed. It took me a long time to figure out what needed to change, which of course turned out to be me. But it wasn’t wasted time. I’m grateful for all my experiences, especially the challenges. I’ve had plenty of opportunities to learn, and will continue to screw things up and learn more. I know myself much better now and understand that I need to keep moving forward. So that’s what I do. Every single day. –Mike Photo credit: “Hindsight” from The.Rohit Security is changing. So is Securosis. Check out Rich’s post on how we are evolving our business. We’ve published this year’s Securosis Guide to the RSA Conference. 
It’s our take on the key themes of this year’s conference (which is really a proxy for the industry), as well as deep dives on cloud security, threat protection, and data security. And there is a ton of meme goodness… Check out the blog post or download the guide directly (PDF). The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. You can check it out on YouTube. Take an hour. Your emails, alerts, and Twitter timeline will be there when you get back. Securosis Firestarter Have you checked out our video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail. Mar 16 – The Rugged vs. SecDevOps Smackdown Feb 17 – RSA Conference – The Good, Bad and Ugly Dec 8 – 2015 Wrap Up and 2016 Non-Predictions Nov 16 – The Blame Game Nov 3 – Get Your Marshmallows Oct 19 – re:Invent Yourself (or else) Aug 12 – Karma July 13 – Living with the OPM Hack May 26 – We Don’t Know Sh–. You Don’t Know Sh– May 4 – RSAC wrap-up. Same as it ever was. March 31 – Using RSA March 16 – Cyber Cash Cow March 2 – Cyber vs. Terror (yeah, we went there) February 16 – Cyber!!! February 9 – It’s Not My Fault! Heavy Research We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too. Resilient Cloud Network Architectures [Design Patterns] [Fundamentals] Shadow Devices The Exponentially Expanding Attack Surface Building a Vendor IT Risk Management Program Program Structure Understanding Vendor IT Risk SIEM Kung Fu Getting Started and Sustaining Value Advanced Use Cases Fundamentals Recently Published Papers Securing Hadoop Threat Detection Evolution Building Security into DevOps Pragmatic Security for Cloud and Hybrid Networks EMV Migration and the Changing Payments Landscape Applied Threat Intelligence Endpoint Defense: Essential Practices Cracking the Confusion: Encryption & Tokenization for Data Centers, Servers & Applications Monitoring the Hybrid Cloud Best Practices for AWS Security The Future of Security Incite 4 U Still no free lunch, even if it’s fake: Troy Hunt’s post is awesome, digging into how slimy free websites gather

Read Post

Incite 3/30/2016: Rational People Disagree

It’s definitely a presidential election year here in the US. My Twitter and Facebook feeds are overwhelmed with links about what this politician said and who that one offended. We get to learn how a 70-year-old politician got arrested in his 20s and why that matters now. You also get to understand that there are a lot of different perspectives, many of which make absolutely no sense to you. Confirmation bias kicks into high gear, because when you see something you don’t agree with, you instinctively ignore it, or have a million reasons why it’s dead wrong. I know mine does.

Some of my friends frequently share news about their chosen candidates, and even more link to critical information about the enemy. I’m not sure whether they do this to make themselves feel good, to commiserate with people who think just like them, or in an effort to influence folks who don’t. I have to say this can be remarkably irritating, because nothing any of these people posts is going to sway my fundamental beliefs.

That got me thinking about one of my rules for dealing with people. I don’t talk about religion or politics. Unless I’m asked. And depending on the person, I might not engage even if asked. Simply because nothing I say is going to change someone’s story regarding either of those two third rails of friendship. I will admit to scratching my head at some of the stuff people I know post to social media. I wonder if they really believe that stuff, or they are just trolling everyone. But at the end of the day, everyone is entitled to their opinion, and it’s not my place to tell them their opinion is idiotic. Even if it is.

I try very hard not to judge people based on their stories and beliefs. They have different experiences and priorities than me, and that results in different viewpoints. But not judging gets pretty hard between March and November every 4 years. At least 4 or 5 times a day I’m tempted to click the unfollow link when something particularly offensive (to me) shows up in my feed. But I don’t hit the button to actually unfollow anyone. I use the fact that I was triggered by someone as an opportunity to pause and reflect on why that specific headline, post, link, or opinion bothers me so much. Most of the time it’s just exhaustion. If I see one more thing about a huge fence or bringing manufacturing jobs back to the US, I’m going to scream. I get these are real issues which warrant discussion. But in a world with a 24/7 media cycle, the discussion never ends.

I’m not close-minded, although it may seem that way. I’m certainly open to listening to other candidates’ views, mostly to understand the other side of the discussion and continually refine and confirm my own positions. But I have some fundamental beliefs that will not change. And no, I’m not going to share them here (that third rail again!). I know that rational people can disagree, and that doesn’t mean I don’t respect them, or that I don’t want to work together or hang out and drink beer. It just means I don’t want to talk about religion or politics. –Mike

Photo credit: “Laugh-Out-Loud Cats #2204” from Ape Lad

Security is changing. So is Securosis. Check out Rich’s post on how we are evolving our business. We’ve published this year’s Securosis Guide to the RSA Conference. It’s our take on the key themes of this year’s conference (which is really a proxy for the industry), as well as deep dives on cloud security, threat protection, and data security. And there is a ton of meme goodness… Check out the blog post or download the guide directly (PDF).
The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. You can check it out on YouTube. Take an hour. Your emails, alerts, and Twitter timeline will be there when you get back. Securosis Firestarter Have you checked out our video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail. Mar 16 – The Rugged vs. SecDevOps Smackdown Feb 17 – RSA Conference – The Good, Bad and Ugly Dec 8 – 2015 Wrap Up and 2016 Non-Predictions Nov 16 – The Blame Game Nov 3 – Get Your Marshmallows Oct 19 – re:Invent Yourself (or else) Aug 12 – Karma July 13 – Living with the OPM Hack May 26 – We Don’t Know Sh–. You Don’t Know Sh– May 4 – RSAC wrap-up. Same as it ever was. March 31 – Using RSA March 16 – Cyber Cash Cow March 2 – Cyber vs. Terror (yeah, we went there) February 16 – Cyber!!! February 9 – It’s Not My Fault! Heavy Research We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too. Resilient Cloud Network Architectures Design Patterns Fundamentals Shadow Devices The Exponentially Expanding Attack Surface Building a Vendor IT Risk Management Program Program Structure Understanding Vendor IT Risk Securing Hadoop Architectural Security Issues Architecture and Composition Security Recommendations for NoSQL platforms SIEM Kung Fu Getting Started and Sustaining Value Advanced Use Cases Fundamentals Recently Published Papers Threat Detection Evolution Building Security into DevOps Pragmatic Security for Cloud and Hybrid Networks EMV Migration and the Changing Payments Landscape Applied Threat Intelligence Endpoint Defense: Essential Practices Cracking the Confusion: Encryption & Tokenization for Data Centers, Servers & Applications Security and Privacy on the Encrypted Network Monitoring the Hybrid Cloud Best Practices for AWS Security The Future of Security Incite 4 U That depends on your definition of consolidation: Stiennon busts out his trusty spreadsheet of security companies and concludes that the IT security industry is not consolidating. He has numbers. Numbers! That prove there is a

Read Post

Resilient Cloud Network Architectures: Design Patterns

We introduced resilient cloud networks in this series’ first post. We define them as networks using cloud-specific features to provide both stronger security and higher availability for your applications. This post will dig into two different design patterns, and show how cloud networking enables higher resilience.

Network Segregation by Default

Before we dive into design patterns let’s make sure we are all clear on using network segmentation to improve your security posture, as discussed in our first post. We know segmentation isn’t novel, but it is still difficult in a traditional data center. Infrastructure running different applications gets intermingled, just to efficiently use existing hardware. Even in a totally virtualized data center, segmentation requires significant overhead and management to keep all applications logically isolated – which makes it rare.

What is the downside of not segmenting properly? It offers adversaries a clear path to your most important stuff. They can compromise one application and then move deeper into your environment, accessing resources not associated with the application stack they first compromised. So if they bust one application, there is a high likelihood they’ll end up with free rein over everything in the data center.

The cloud is different. Each server in a cloud environment is associated with a security group, which defines with very fine granularity which other devices it can communicate with, and over what protocols. This effectively enables you to contain an adversary’s ability to move within your environment, even after compromising a server or application. This concept is often called limiting blast radius. So if one part of your cloud environment goes boom, the rest of your infrastructure is unaffected. This is a key concept in cloud network architecture, highlighted in the design patterns below.

PaaS Air Gap

To demonstrate a more secure cloud network architecture, consider an Internet-facing application with both web server and application server tiers. Due to the nature of the application, communications between the two tiers run through message queues and notifications, so the web servers don’t need to communicate directly with the application servers. The application server tier connects to the database (a Platform as a Service offering from the cloud provider). The application server tier also communicates with a traditional data center to access other internal corporate data outside the cloud environment.

An application must be architected from the get-go to support this design. You aren’t going to redeploy your 20-year-old legacy general ledger application to this design. But if you are architecting a new application, or can rearchitect existing applications, and want total isolation between environments, this is one way to do it. Let’s describe the design.

Network Security Groups

The key security control typically used in this architecture is a Network Security Group, allowing access to the app servers only from the web servers, and only to the specific port and protocol required. This isolation limits blast radius. To be clear, the NSG is applied individually to each instance – not to subnets. This avoids a flat network, where all instances within a subnet have unrestricted access to all subnet peers.

PaaS Services

In this application you wouldn’t open access from the web server NSG to the app server NSG, because the architecture doesn’t require direct communication between web servers and app servers.
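To make that concrete, here is a minimal sketch using boto3 against AWS EC2, where the NSG concept discussed here maps to AWS security groups (the VPC ID, ports, and data center CIDR are hypothetical). The app tier gets no ingress rule from the web tier at all; its only inbound path is the VPN connection to the data center described below:

```python
import boto3

ec2 = boto3.client("ec2")

VPC_ID = "vpc-0123456789abcdef0"  # hypothetical VPC
DC_VPN_CIDR = "10.50.0.0/16"      # hypothetical data center range behind the VPN

# One security group per tier, attached to instances rather than subnets.
web_sg = ec2.create_security_group(
    GroupName="web-tier", Description="Web tier", VpcId=VPC_ID)["GroupId"]
app_sg = ec2.create_security_group(
    GroupName="app-tier", Description="App tier", VpcId=VPC_ID)["GroupId"]

# Web tier accepts HTTPS from the Internet.
ec2.authorize_security_group_ingress(
    GroupId=web_sg,
    IpPermissions=[{"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
                    "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}])

# App tier accepts nothing from the web tier; its only ingress rule is the
# data center connection over the VPN. Everything else is denied by default,
# so the tiers exchange data exclusively through the queue and notification
# services.
ec2.authorize_security_group_ingress(
    GroupId=app_sg,
    IpPermissions=[{"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
                    "IpRanges": [{"CidrIp": DC_VPN_CIDR}]}])
```

Because security groups deny inbound traffic by default, the absence of a web-to-app rule is itself the air gap.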
Instead the cloud provider offers a message queue platform and notification service which provide asynchronous communication between the web and application tiers. So even if the web servers are compromised, the app servers are not accessible. Further isolation is offered by a PaaS database, also offered by the cloud service provider. You can restrict requests to the PaaS DB to specific Network Security Groups. This ensures only the right instances can request information from the database service, and all requests are authorized.

Connection to the Data Center

The application may require data from the data center, so the app servers have access to the needed data through a VPN. You route all traffic to the data center through this inspection and control point. Typically it’s better not to route cloud traffic through inspection bottlenecks, but in this design pattern it’s not a big deal, because the traffic needs to pass over a specific egress connection to the data center, so you might as well inspect there as well. You ensure ingress traffic over that connection can only go to the app server security group. This ensures that an adversary who compromises your data center network cannot access your whole cloud network by bouncing through that connection.

Advantages of This Design

Isolation between Web and App Servers: By putting each auto-scaling group in its own Network Security Group, you restrict its access to everything else.

No Direct Connection: In this design pattern you can block direct traffic to the application servers from anywhere but the VPN. Intra-application traffic is asynchronous via the message queue and notification service, so isolation is complete.

PaaS Services: This architecture uses cloud provider services, with strong built-in security and resilience. Cloud providers understand that security and availability are core to their business.

What’s next for this kind of architecture? To advance it you could deploy mirrors of the application in different zones within a region to limit the blast radius in case one device is compromised, and to provide additional resiliency in case of a zone failure. Additionally, if you use immutable servers within each auto-scale group, you can update/patch/reconfigure instances automatically by changing the master image and having auto-scaling replace the old instances with new ones. This limits configuration drift and adversary persistence.

Multi-Region Web Site

This architecture was designed to deploy a website in multiple regions, with availability as close to 100% as possible. This design is far more expensive than even running in multiple zones within a single region, because you need to pay for network traffic between regions (compared to free intra-region traffic); but if uptime is essential to your business, this architecture improves resiliency. This is an externally facing application, so you run traffic through a cloud WAF to get rid of obvious attack traffic. Inbound sessions can

Read Post

Resilient Cloud Network Architectures: Fundamentals

As much as we like to believe we have evolved as a species, people continue to be scared of things they don’t understand. Yes, many organizations have embraced the cloud whole hog and are rushing headlong into the cloud age. But it’s a big world, and millions of others remain paralyzed – not really understanding cloud computing, and taking the general approach that it can’t be secure because, well, it just can’t. Or it’s too new. Or for some other unfounded and incorrect reason. Kind of like when folks insisted that the Earth was the center of the universe.

This blog series builds on our recent Pragmatic Security for Cloud and Hybrid Networks paper, focusing on cloud-native network architectures that provide security and availability in ways you cannot accomplish in a traditional data center. This evolution will take place over the next decade, and organizations will need to support hybrid networks for some time. But for those ready, willing, and able to step forward into the future today, the cloud is waiting to break the traditional rules of how technology has been developed, deployed, scaled, and managed. We have been aggressive in proselytizing our belief that the move towards the cloud is the single biggest disruption in technology for the next few decades. Yes, even bigger than the move from mainframes to client/server (we’re old – we know). So our Resilient Cloud Network Architectures series will provide the basics of cloud network security, with a few design patterns to illustrate.

We would like to thank Resilient Systems for provisionally agreeing to license the content in this paper. As always, we’ll build the content using our Totally Transparent Research methodology, meaning we will post everything to the blog first, and allow you (our readers) to poke holes in it. Once it has been sufficiently prodded, we will publish a paper for your reference.

Defining Resilient

If we bust out the old dictionary to define resilient, we get: able to become strong, healthy, or successful again after something bad happens; able to return to an original shape after being pulled, stretched, pressed, bent, etc. In the context of computing, you want to deploy technology that can not just become strong again, but resist attack in the first place. Recoverability is also key: if something bad happens you want to restore service quickly, if it causes an outage at all. For network architecture we always fall back on the cloud computing credo: Design for failure. A resilient network architecture both makes it harder to compromise an application and minimizes downtime in case of an issue. Key aspects of cloud computing which provide security and availability include:

Network Isolation: Using the inherent ability of the cloud to restrict connections (via software firewalls, which are called security groups and described below), you can build a network architecture that fully isolates the different tiers of an application stack. That prevents a compromise in one application (or database) from leaking information stored in another, or being used to attack it.

Account Isolation: Another important feature of the cloud is the ability to use multiple accounts per application. Each of your different environments (Dev, Test, Production, Logging, etc.) can use different accounts, which provides valuable isolation because you cannot access cloud infrastructure across accounts without explicit authorization.

Immutability: An immutable server is one that is never logged into or changed in production.
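As a minimal sketch of what that pattern can look like, assuming an AWS auto-scaling group managed with boto3 (the launch template, group name, and AMI ID are hypothetical), you roll out a freshly built image rather than touching running servers:

```python
import boto3

ec2 = boto3.client("ec2")
autoscaling = boto3.client("autoscaling")

LAUNCH_TEMPLATE = "web-tier-lt"    # hypothetical launch template
ASG_NAME = "web-tier-asg"          # hypothetical auto-scaling group
NEW_AMI = "ami-0abc1234def567890"  # freshly built, known-good image

# 1. Register a new launch template version that points at the new image.
ec2.create_launch_template_version(
    LaunchTemplateName=LAUNCH_TEMPLATE,
    SourceVersion="$Latest",
    LaunchTemplateData={"ImageId": NEW_AMI},
)

# 2. Point the auto-scaling group at the latest template version.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName=ASG_NAME,
    LaunchTemplate={"LaunchTemplateName": LAUNCH_TEMPLATE, "Version": "$Latest"},
)

# 3. Roll the group: old instances are terminated and replaced with fresh
#    ones built from the new image. Nobody logs in or patches in place.
autoscaling.start_instance_refresh(AutoScalingGroupName=ASG_NAME)
```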
In cloud-native DevOps environments servers are deployed in auto-scale groups based on standard images. This prevents human error and configuration drift from creating exploitation paths. You take a new known-good state, and completely replace older images in production. No more patching and no more logging into servers.

Regions: You could build multiple data centers around the world to provide redundancy. But that’s not a cheap option, and rarely feasible. To do the same thing in the cloud, you basically just replicate an entire environment in a different region via an API call or a couple clicks in a cloud console. Regions are available all over the world, with multiple availability zones within each, to further minimize single points of failure. You can load balance between zones and regions, leveraging auto-scaling to keep your infrastructure running the same images in real time. We will explain this design pattern in our next post.

The key takeaway is that cloud computing provides architectural options which are either impossible or economically infeasible in a traditional data center, to provide greater protection and availability. In this series we will describe the fundamentals of cloud networking for context, and then dig into design patterns which provide both security and availability – which we define as resilience.

Understanding Cloud Networks

The key difference between a network in your data center and one in the cloud is that cloud customers never access the ‘real’ network or hardware. Cloud computing uses virtual networks to abstract the networks you see and manage from the (invisible) underlying physical resources. When your server gets IP address 10.0.1.12, that IP address does not exist on routing hardware – it’s a virtual address on a virtual network. Everything is handled in software.

Cloud networking varies across cloud providers, but differs from traditional networks in visibility, management, and velocity of change. You cannot tap into a cloud provider’s virtual network, so you’ll need to think differently about how to monitor your networks. Additionally, cloud networks are typically managed via scripts or programs making Application Programming Interface (API) calls, rather than through a graphical console or command line. That enables developers to do pretty much anything, including standing up networks and reconfiguring them – instantly, via code. Finally, cloud networks change much faster than physical networks because cloud environments change faster, including spinning up and shutting down servers via automation. So traditional workflows to govern network change don’t really map to your cloud network.

It can be confusing because cloud networks look like traditional networks, with their own routing tables and firewalls. But looks are deceiving – although familiar constructs have been carried over, there are fundamental differences.

Cloud Network Architectures

In order to choose the right solution to address your requirements, you need to understand the types of cloud network

Read Post

Shadow Devices: The Exponentially Expanding Attack Surface [New Series]

One of the challenges of being security professionals for decades is that we actually remember the olden days. You remember, when Internet-connected devices were PCs; then we got fancy and started issuing laptops. That’s what was connected to our networks. If you recall, life was simpler then. But we don’t have much time for nostalgia. We are too busy getting a handle on the explosion of devices connected to our networks, accessing our data. Here is just a smattering of what we see:

Mobile devices: Supporting smartphones and tablets seems like old news, mostly because you can’t remember a time when they weren’t on your network. But despite their short history, their impact on networking and security cannot be overstated. What’s more challenging is how these devices can connect directly to the cellular data network, which gives them a path around your security controls.

BYOD: Then someone decided it would be cheaper to have employees use their own devices, and Bring Your Own Device (BYOD) became a thing. You can have employees sign paperwork giving you the ability to control their devices and install software, but in practice they get (justifiably) very cranky when they cannot do something on their personal devices. So balancing the need to protect corporate data against antagonizing employees has been challenging.

Other office devices: Printers and scanners have been networked for years. But as more sophisticated imaging devices emerged, we realized their on-board computers and storage were insecure. They became targets, attacker beachheads.

Physical security devices: The new generation of physical security devices (cameras, access card readers, etc.) is largely network connected. It’s great that you can grant access to a locked-out employee from your iPhone on the golf course, but much less fun when attackers grant themselves access.

Control systems and manufacturing equipment: The connected revolution has made its way to shop floors and facilities areas as well. Whether it’s a sensor collecting information from factory robots or warehousing systems, these devices are networked too, so they can be attacked. You may have heard of Stuxnet targeting centrifuge control systems. Yep, that’s what we’re talking about.

Healthcare devices: If you go into any healthcare facility nowadays, monitoring devices and even some treatment devices are managed through network connections. There are jokes to be made about who cares if shop floor robots are taken over, but if medical devices are attacked, the ramifications are significantly more severe.

Connected home: Whether it’s a thermostat, security system, or home automation platform – the expectation is that you will manage it from wherever you are. That means a network connection and access to the Intertubes. What could possibly go wrong?

Cars: Automobiles can now use either your smartphone connection or their own cellular link to connect to the Internet for traffic, music, news, and other services. They can transmit diagnostic information as well. All cool and shiny, but recent stunt hacking has proven a moving automobile can be attacked and controlled remotely. Again, what’s to worry?

There will be billions of devices connected to the Internet over the next few years. They all present attack surface. And you cannot fully know what is exploitable in your environment, because you don’t know about all your devices. The industry wants to dump all these devices into a generic Internet of Things (IoT) bucket because IoT is the buzzword du jour.
The latest Chicken Little poised to bring down the sky. It turns out the sky has already fallen – networks are already too vast to fully protect. The problem is getting worse by the day as pretty much anything with a chip in it gets networked. So instead of a manageable environment, you need to protect Everything Internet. Anything with a network address can be attacked. Fortunately better fundamental architectures (especially for mobile devices) make it harder to compromise new devices than traditional PCs (whew!), but sophisticated attackers don’t seem to have trouble compromising any device they can reach. And that says nothing of devices whose vendors have paid little or no attention to security to date. Healthcare and control system vendors, we’re looking at you! They have porous defenses, if any, and once an attacker gains presence on the network, they have a bridgehead to work their way to their real targets.

In the Shadows

So what? You don’t even have medical devices or control systems – why would you care? The sad fact is that what you don’t see can hurt you. Your entire security program has been built to protect what you can see with traditional discovery and scanning technologies. The industry has maintained a very limited concept of what you should be looking for – largely because that’s all security scanners could see. The current state of affairs is that you run scans every so often and see new devices emerge. You test them for configuration issues and vulnerabilities, and then you add those issues to the end of an endless list of things you’ll never have time to finish.

Unfortunately visible devices are only a portion of the network-connected devices in your environment. There are hundreds, if not thousands, of other devices you don’t know about on your network. You don’t scan them periodically, and you have no idea about their security posture. Each of them can be attacked, and may provide an adversary a presence in your environment. Your attack surface is much larger than you thought. These shadow devices are infrequently discussed, and rarely factored into discovery and protection programs. It’s a big Don’t Ask, Don’t Tell approach, which never seems to work out well in the end.

We haven’t yet published anything on IoT devices (or Everything Internet), but it’s time. Not because we currently see many attacks in the wild, but because most organizations we talk to are unprepared for when an attack happens, so they will scramble – as usual. We have espoused a “visibility, then control” approach to security for over a decade. Now it’s time to get a handle on the visibility of all devices on your network, so when you need to, you will know what you have to control. And how to control

Read Post

Incite 3/23/2016: The Madness

I’m not sure why I do it, but every year I fill out brackets for the annual NCAA Men’s College basketball tournament. Over all the years I have been doing brackets, I won once. And it wasn’t a huge pool. It was a small pool in my office, when I used to work in an office, so the winnings probably didn’t even amount to a decent dinner at Fuddrucker’s. I won’t add up all my spending or compare against my winning, because I don’t need a PhD in Math to determine that I am way below the waterline. Like anyone who always questions everything, I should be asking myself why I continue to play. I’m not going to win – I don’t even follow NCAA basketball. I’d have better luck throwing darts at the wall. So clearly it’s not a money-making endeavor. I guess I could ask the same question about why I sit in front of a Wheel of Fortune slot machine in a casino. Or why I buy PowerBall tickets when the pot goes above $200MM. I understand statistics – I know I’m not going to win slots (over time) or the lottery (ever). They call the NCAA tournament March Madness – perhaps because most people get mad when their brackets blow up on the second day of the tournament when the team they picked to win it all loses to a 15 seed. Or does that just happen to me? But I wasn’t mad. I laughed because 25% of all brackets had Michigan State winning the tournament. And they were all as busted as mine. These are rhetorical questions. I play a few NCAA tournament brackets every year because it’s fun. I get to talk smack to college buddies about their idiotic picks. I play the slots because my heart races when I spin the wheel and see if I got 35 points or 1,000. I play the lottery because it gives me a chance to dream. What would I do with $200MM? I’d do the same thing I’m doing now. I’d write. I’d sit in Starbucks, drink coffee, and people-watch, while pretending to write. I’d speak in front of crowds. I’d explore and travel with my loved ones. I’d still play the brackets, because any excuse to talk smack to my buddies is worth the minimal donation. And I’d still play the lottery. And no, I’m not certifiable. I just know from statistics that I wouldn’t have any less chance to win again just because I won before. Score 1 for Math. –Mike Photo credit: “Now, that is a bracket!” from frankieleon We’ve published this year’s Securosis Guide to the RSA Conference. It’s our take on the key themes of this year’s conference (which is really a proxy for the industry), as well as deep dives on cloud security, threat protection, and data security. And there is a ton of meme goodness… Check out the blog post or download the guide directly (PDF). The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. You can check it out on YouTube. Take an hour. Your emails, alerts, and Twitter timeline will be there when you get back. Securosis Firestarter Have you checked out our video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail. Mar 16 – The Rugged vs. SecDevOps Smackdown Feb 17 – RSA Conference – The Good, Bad and Ugly Dec 8 – 2015 Wrap Up and 2016 Non-Predictions Nov 16 – The Blame Game Nov 3 – Get Your Marshmallows Oct 19 – re:Invent Yourself (or else) Aug 12 – Karma July 13 – Living with the OPM Hack May 26 – We Don’t Know Sh–. You Don’t Know Sh– May 4 – RSAC wrap-up. Same as it ever was. March 31 – Using RSA March 16 – Cyber Cash Cow March 2 – Cyber vs. 
We’ve published this year’s Securosis Guide to the RSA Conference. It’s our take on the key themes of this year’s conference (which is really a proxy for the industry), as well as deep dives on cloud security, threat protection, and data security. And there is a ton of meme goodness… Check out the blog post or download the guide directly (PDF).

The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. You can check it out on YouTube. Take an hour. Your emails, alerts, and Twitter timeline will be there when you get back.

Securosis Firestarter

Have you checked out our video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.

  • Mar 16 – The Rugged vs. SecDevOps Smackdown
  • Feb 17 – RSA Conference – The Good, Bad and Ugly
  • Dec 8 – 2015 Wrap Up and 2016 Non-Predictions
  • Nov 16 – The Blame Game
  • Nov 3 – Get Your Marshmallows
  • Oct 19 – re:Invent Yourself (or else)
  • Aug 12 – Karma
  • July 13 – Living with the OPM Hack
  • May 26 – We Don’t Know Sh–. You Don’t Know Sh–
  • May 4 – RSAC wrap-up. Same as it ever was.
  • March 31 – Using RSA
  • March 16 – Cyber Cash Cow
  • March 2 – Cyber vs. Terror (yeah, we went there)
  • February 16 – Cyber!!!
  • February 9 – It’s Not My Fault!

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.

  • Shadow Devices: The Exponentially Expanding Attack Surface
  • Building a Vendor IT Risk Management Program: Program Structure; Understanding Vendor IT Risk
  • Securing Hadoop: Architectural Security Issues; Architecture and Composition; Security Recommendations for NoSQL platforms
  • SIEM Kung Fu: Getting Started and Sustaining Value; Advanced Use Cases; Fundamentals
  • Building a Threat Intelligence Program: Success and Sharing; Using TI; Gathering TI; Introduction

Recently Published Papers

  • Threat Detection Evolution
  • Building Security into DevOps
  • Pragmatic Security for Cloud and Hybrid Networks
  • EMV Migration and the Changing Payments Landscape
  • Applied Threat Intelligence
  • Endpoint Defense: Essential Practices
  • Cracking the Confusion: Encryption & Tokenization for Data Centers, Servers & Applications
  • Security and Privacy on the Encrypted Network
  • Monitoring the Hybrid Cloud
  • Best Practices for AWS Security
  • The Future of Security

Incite 4 U

  • Enough already: Encryption is a safeguard for data. It helps ensure data is used the way its owner intends. We work with a lot of firms – helping them protect data from rogue employees, hackers, malicious government entities, and whoever else may want to misuse their data. We try to avoid touching political topics on this blog, but the current attempt by US Government agencies to paint encryption as a terrorist tool is beyond absurd. They are effectively saying security is a danger, and that has really struck a nerve in the security community. Forget for a minute that the NSA already has all the data that moves on and off your cellphone, and that law enforcement already has the means to access the contents of iPhones without Apple’s assistance. And avoid wallowing in counter-examples where encryption aided freedom, or illustrations of misuse of power to inspire fear in the opposite direction. These arguments devolve into pig-wrestling – only the pig enjoys that sort of thing. As Rich explained in Do We Have

Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
  • Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments and input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
  • Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.