Securosis

Research

Ecosystem Threat Intelligence: Use Cases and Selection Criteria

We touched on the Risks of the Extended Enterprise and the specifics of Assessing Partner Risk, so now let’s apply these concepts to a few use cases to make them a little more tangible. We will follow a similar format for each use case: the business need for access, the threat presented by that access, and finally how Ecosystem Threat Intelligence (EcoTI) helps you make better decisions about specific partners. Without further ado, let’s jump in.

Simple Business Process Outsourcing Use Case

Let’s start simply. As with many businesses, sometimes it is cheaper and better to have external parties fulfill non-strategic functions. We could be talking about anything from legacy application maintenance to human resources form processing. But almost all outsourcing arrangements require you to provide outsiders with access to your systems so they can use some of your critical data. For any kind of software development, an external party needs access to your source code. And unless you have a very advanced and segmented development network, those developers have access to much more than just the (legacy) applications they are working on. So if any of their devices are compromised, attackers can gain access to your developers’ devices, your build systems, and a variety of other things that would probably be bad. If we are talking about human resources outsourcing, those folks have access to personnel records, which may include sensitive information such as salaries, employment agreements, health issues, and other details you probably don’t want published on Glassdoor. Even better, organizations increasingly use SaaS providers for HR functions, which moves that data outside your data center and removes even more of your waning control. The commonality between these two outsourcing situations is that access is restricted to a single trading partner.
Of course you might use multiple development shops, but for simplicity’s sake we will just talk about one. In this case your due diligence occurs while selecting the provider and negotiating the contract. That may entail demanding background checks on external staffers and a site visit to substantiate sufficient security controls. At that point you should feel pretty good about the security of your trading partner. But what happens after that? Do you assess these folks on an ongoing basis? What happens if they hire a bad apple? Or if they are attacked and compromised due to some other issue that has nothing to do with you? Thus the importance of an ongoing assessment capability. If you are a major client of your outsourcer you might have a big enough stick to get them to share their network topology, so at least you won’t have to build that yourself. In this scenario you are predominantly concerned with bot activity (described as Sickness from Within in our previous Risk Assessment post) because that is the smoking gun for compromised devices with access. Compromised Internet-facing devices can also cause issues, so you need to consider them too. But as you can see, in this use case it makes sense to prioritize internal issues over public-facing vulnerabilities when you calculate a relative risk score. In this limited scenario it is not really a relative risk score, because you aren’t comparing the provider to anyone else; only one external party has access to any particular dataset. So if your Ecosystem Threat Intelligence alerts you to an issue with this partner you will need to act quickly. Their access could cause you real problems.

Many Partners Use Case

To complicate things a bit, let’s consider that you may need to provide access to many trading partners. Perhaps external sales reps have access to your customer database and other proprietary information about your products and services.
Or perhaps your chain of hospitals provides access to medical systems for hundreds of doctors with privileges to practice at your facilities. Or it could even be upstream suppliers who make and assemble parts for your heavy machinery products. These folks have your designs and sales forecasts, because they need to deliver inventory just in time for you to get the product out the door (and hit your quarterly numbers). Regardless of the situation, you have to support dozens of trading partners or more, offering them access to some of your most critical enterprise data. Sometimes it’s easier for targeted attackers to go after your trading partners than to target you directly. We have seen this in the real world, with subassembly manufacturers for defense contractors hacked for access to military schematics and other critical information on a particular weapons program. In this situation, as in the use case above, the security team typically cannot refuse to connect with the partner. Sales executives frown on the security team shutting down a huge sales channel. Similarly, the security team cannot tell the final assembly folks they can’t get their seats because the seat manufacturer got breached. Although you can’t stop the business, you can certainly warn the senior team about the risks of connecting with that specific trading partner. But to substantiate those concerns, you need data to back up your claim. This is where calculating relative risk scores for multiple trading partners can really help make your case. It’s probably not a bad assumption that all trading partners are compromised in some fashion. But which ones are total fiascos? Which partners cannot even block a SQLi attack on an ecommerce site? Which have dozens of bots flooding the Internet with denial of service attacks? Specifics from your Ecosystem Threat Intel efforts enable you to make a fact-based case to senior executives that connecting to a partner is not worth the risk.
Again, you can’t make the business decision for that executive, but you can arm them with enough information for them to make a rational decision. Or you could suggest an alternative set of security controls for those specific partners. You might force them to connect into your systems through a VDI (virtual desktop) service on your premises (so your data never leaves your network) and monitor everything they do in
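The relative risk scoring discussed above can be sketched as a simple weighted roll-up. This is an illustrative sketch only, not any vendor's actual algorithm: the categories and weights are assumptions, chosen to reflect the post's point that internal indicators of compromise (bot activity) should outweigh public-facing vulnerabilities.

```python
# Illustrative relative risk scoring for trading partners.
# Categories and weights are hypothetical; per the post, internal
# indicators ("Sickness from Within") are weighted above public issues.

WEIGHTS = {
    "bot_activity": 0.5,     # compromised devices with access
    "public_vulns": 0.3,     # issues on Internet-facing devices
    "breach_history": 0.2,   # publicly disclosed compromises
}

def partner_risk(findings: dict) -> float:
    """Roll normalized (0-10) findings into a single weighted score."""
    return sum(WEIGHTS[cat] * findings.get(cat, 0.0) for cat in WEIGHTS)

def rank_partners(partners: dict) -> list:
    """Return (partner, score) pairs, riskiest first, for relative comparison."""
    scored = {name: partner_risk(f) for name, f in partners.items()}
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    partners = {
        "DevShop A": {"bot_activity": 8, "public_vulns": 3, "breach_history": 2},
        "HR SaaS B": {"bot_activity": 1, "public_vulns": 6, "breach_history": 0},
    }
    for name, score in rank_partners(partners):
        print(f"{name}: {score:.1f}")
```

The point of ranking, rather than reading any one score as absolute truth, is exactly the relative comparison the post describes: which partners are the total fiascos.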


Random Thought: Meet Your New Database

Something has been bugging me. It’s big data. Not the industry but the term itself. Every time I am asked about big data I need to use the term in order to be understood, but the term itself steers the uninitiated in the wrong direction. It leaves a bad taste in my mouth. It’s wrong. It’s time to stop thinking about big data as big data, and start looking at these platforms as the next logical step in data management. What we call “big data” is really a building-block approach to databases. Rather than the pre-packaged relational systems we have grown accustomed to over the last two decades, we now assemble different pieces (data management, data storage, orchestration, etc.) to fit specific requirements. These platforms, in dozens of different flavors, have more than proven their worth and no longer need to escape the shadow of relational platforms. It’s time to simply think of big data as modular databases. Big data has had something of a chip on its shoulder, with proponents calling the movement ‘NoSQL’ to differentiate these platforms from relational databases. The term “big data” was used to describe this segment, but as it captures only one – and not even the most important – characteristic, the term now under-serves the entire movement. These databases may focus on speed, size, analytic capabilities, failsafe operation, or some other goal, and they allow computation on a massive scale for a very small amount of money. But just as importantly, they are fully customizable to meet different needs. And they work! This is not a fad. It is not going away. It is not always easy to describe what these modular databases look like, as they are as variable as the applications that use them, but they share a set of common characteristics. Hopefully this post will not trigger any “relational databases are dead” comments.
Mainframe databases are still alive and thriving, and relational databases have a dominant market position that is not about to evaporate either. But when you start a new project, you are probably not looking at a relational database management system. Programmers simply need more flexibility in how they manage and use data, and relational platforms do not provide the flexibility to accommodate all the diverse needs out there. Big data is a database, and I bet within the next couple years when we say ‘database’ we won’t mean relational – we will mean big data modular databases.


VMware Doubles Down on SDN

VMware is pushing hard on the virtual datacenter concept this week at VMworld, with the first release of their new SDN networking approach based on the Nicira acquisition. Greg Ferro has a good take (hat tip to @beaker/Hoff for the link): VMware NSX is a solution for programmable and dynamic networking services that interoperates with VMware vCloud Director, OpenStack, or Hyper-V – this is where the real value is derived. In the near future, servers will no longer be “operating systems” but “application containers.” Instead of installing an application onto an operating system, the application will be part of a service template that will do most or all of these: Three things: I don’t think it is a game changer itself, but it is a (sort of new) entry by a major player into an area of growing interest. It will certainly create a lot more dialogue. Oh crap, now I need to brush up on networking again. And you networking types need to brush up on programming and APIs. SDN coupled with the cloud can enable seriously cool security capabilities. Like a couple API calls to identify every server on every network segment, every path to said servers, and all the firewall rules around them. In real time.
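The "couple API calls" idea above can be sketched in a few lines. This is a hypothetical illustration: the dict shapes loosely follow an EC2-style API (the kind of data `describe_instances` and `describe_security_groups` calls return); the field names here are assumptions, and in a live environment you would fetch the data with a cloud client rather than hard-code it.

```python
# Sketch: enumerate servers per network segment and the firewall rules
# around each, from cloud-API-shaped data. Field names are illustrative,
# modeled loosely on an EC2-style inventory API.

def map_segments(instances, security_groups):
    """Group instances by subnet and attach each instance's firewall rules."""
    rules_by_sg = {sg["GroupId"]: sg["IpPermissions"] for sg in security_groups}
    segments = {}
    for inst in instances:
        entry = {
            "instance": inst["InstanceId"],
            "rules": [rule for sg_id in inst["SecurityGroups"]
                      for rule in rules_by_sg.get(sg_id, [])],
        }
        segments.setdefault(inst["SubnetId"], []).append(entry)
    return segments
```

The interesting part is not the code but the property the post calls out: because the cloud API is the canonical source, the answer reflects the network as it is right now, not as a stale inventory database remembers it.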


Ecosystem Threat Intelligence: Assessing Partner Risk

As we discussed in the introduction to our Ecosystem Threat Intelligence series, today’s business environment features increasing use of the extended enterprise. Integrating systems and processes with trading partners can benefit the business, but it dramatically expands the attack surface. A compromised trading partner, with trusted access to your network and systems, gives their attackers that same trusted access to you. To net out the situation: you need to assess the security of your partner ecosystem, and be in a position to make risk-based decisions about whether the connection (collaboration) with trading partners makes sense, and what types of controls are necessary for protection given the potential exposure. To quote our first post: You need to do your due diligence to understand how each organization accessing your network increases your attack surface. You need a clear understanding of how much risk each of your trading partners presents. So you need to assess each partner and receive notification of any issues which appear to put your networks at risk. This post will discuss how to assess your ecosystem risks, and then how to quantify the risks of partners for better (more accurate) decisions about the levels of access and protection appropriate for them. When assessing risks to your ecosystem, penetration tests or even vulnerability scans across all your partners are rarely practical. You certainly can try (and for some very high-profile partners with extensive access to your stuff you probably should), but you need a lower-touch way to perform ongoing assessments of the vast majority of your trading partners. As with many other aspects of security, a leveraged means of collecting and analyzing threat intelligence on partners can identify areas of concern and help you determine whether and when to go deeper and perform active testing with specific partners.

Breach History

Investors say past performance isn’t a good indicator of future results.
Au contraire – in the security business, if an organization has been compromised a number of times, they are considerably more likely to be compromised in the future. Some organizations use publicly disclosed data loss as a catalyst to dramatically improve their security posture… but most don’t. There are various sources for breach information, and consulting a few to confirm the accuracy of a breach report is a good idea. Besides the breach disclosure databases, depending on your industry you might have an ISAC (Information Sharing and Analysis Center) with information about breaches as well. There are some limitations to this approach, though. First of all, many of the public breach reporting databases are volunteer-driven and can be a bit delayed in listing the latest breaches, mostly because the volume of publicly disclosed breaches continues to skyrocket. Some organizations (think military and other governmental organizations) don’t disclose their breaches, so there won’t be public information about them. And others play disclosure games about what is material and what isn’t. So checking public disclosures will not be comprehensive, but it’s certainly a place to start.

Mapping Your Ecosystem

The next step is to figure out whether the partner has current active security issues, which may or may not lead to data loss. The first task is to associate devices and IP addresses with specific trading partners, because to understand a partner’s security posture you need an idea of their attack surface. If you have the proverbial “big bat” with a partner – meaning you do a lot of business with them and they have significant incentive to keep you happy – you can ask them for this information. They may share it, or perhaps they won’t – not necessarily because they don’t want to. It is very hard to keep this information accurate and current – they may not have an up-to-date topology.
If you can’t get it from your partner you will need to build it yourself. That involves mining DNS and whois among other network mapping tactics, and is resource intensive. Again, this isn’t brain surgery, but if you have dozens (or more) trading partners it can be a substantial investment. Alternatively, you might look to a threat intelligence service specializing in third-party assessment, which has developed such a map as a core part of its offering. We will talk more about this option under Quick Wins in our next post. Another question on network mapping: how deep and comprehensive does the map need to be? Do you need to know every single network in use within a Global 2000 enterprise? That would be a large number of networks to track. To really understand a partner’s security posture you should develop as comprehensive a viewpoint as you can, within realistic constraints on time and investment. Start with the specific locations that have access to your networks, and be sure to understand the difference between owning a network and actually using it. Many organizations have large numbers of networks, but use very few of them.

Public Malaise

Now that you have a map associating networks with trading partners, you can start analyzing security issues on networks you know belong to trading partners. Start with Internet-accessible networks and devices – mostly because you can get there. You don’t need permission to check out a partner’s Internet-facing devices. In-depth scanning and recon on those devices is bad form, but hopefully attackers aren’t doing that every day, right? If you find an issue, that is a good indication of a lack of security discipline. Especially if the vulnerability is simple. If your partner can’t protect stuff that is obviously under attack (Internet-facing devices), they probably don’t do a good job with other security.
Yes, that is a coarse generalization, but issues on public devices fail the sniff test for an organization with good security practices. So where can you get this information? Several data sources are available: Public website malware checking: There are services that check for malware on websites – mostly by rendering pages automatically on vulnerable devices and seeing whether bad stuff happens. Often a trading partner will buy these services themselves
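The map-building step described earlier (mining DNS to associate partner domains with network addresses) can be sketched as follows. This is a minimal illustration under stated assumptions: the resolver is injectable, so whether the lookups come from plain DNS, whois mining, or a commercial intelligence feed is deliberately left open, and the partner/domain names are made up.

```python
# Sketch: associate trading partners with the IP space their domains
# resolve to, as a crude first cut at mapping a partner's attack surface.
# The resolver is passed in so the same code works with plain DNS,
# whois-derived data, or a third-party threat intelligence feed.
import socket

def default_resolver(hostname):
    """Resolve a hostname to its IPv4 addresses via DNS; [] on failure."""
    try:
        infos = socket.getaddrinfo(hostname, None, socket.AF_INET)
        return sorted({info[4][0] for info in infos})
    except socket.gaierror:
        return []

def map_partner_networks(partner_domains, resolve=default_resolver):
    """Build {partner: {domain: [addresses]}} from a partner->domains map."""
    return {
        partner: {domain: resolve(domain) for domain in domains}
        for partner, domains in partner_domains.items()
    }
```

Running this across dozens of partners is exactly the "substantial investment" the post warns about, which is why a third-party service that already maintains such a map may be the pragmatic choice.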


China Suffers Large DNS DDoS Attack

From the Wall Street Journal (via The Verge): The attack began at 2 a.m. Sunday morning and was followed by a more intense attack at 4 a.m., according to the China Internet Network Information Center, which apologized to affected users in its statement and said it is working to improve its “service capabilities.” The attack, which was aimed at the registry that allows users to access sites with the extension “.cn,” likely shut down the registry for about two to four hours, according to CloudFlare. No idea on the motivation yet, which is interesting. China has one of the most sophisticated filtering systems in the world, and analysts rate the government’s ability to carry out cyber attacks highly. Despite this, China was not capable of defending itself from an attack which CloudFlare says could have been carried out by a single individual. Dear mass media: offense isn’t defense. They come out of different budgets with different motivations. China has IT silos just like we do; get over it.


Friday Summary: August 23, 2013

With seven trips in the last eight weeks – and I would have been 8 for 8 had I not been sick one week – I have been out of the office nearly the entire last two months. It almost feels weird blogging again, but there is going to be a lot to write about in the coming weeks given the huge amount of research underway. Something really hit home the other day when I was finishing up a research project. Every day I learn more about computer security, yet every day – on a percentage basis – I know less about computer security. Despite continuous research and learning, the field grows at what seems like an exponential rate. The number of new subject areas, threats, and response techniques grows faster than any person can keep up with. I was convinced that in the 90s I could ‘know’ pretty much all you needed to know about computer security; that notion is now laughable. Every new thing that has electrons running through it creates a new field for security. Hacking pacemakers and power meters and vehicle computers is not surprising, and along with it the profession continues to grow far beyond a single topic to hundreds of disciplines, each with distinct attack and defense perspectives. No person has a hope of being an expert in more than a couple sub-disciplines. And I think that is awesome! Every year there is new stuff to learn, on both the ‘shock and awe’ attack side and the eternally complex defense side. What spawned this train of thought was Black Hat this year, where I saw genuine enthusiasm for security, and in many cases for some very esoteric fields of study. My shuttle bus on the way to the airport was loaded with newbie security geeks talking about how quantum computing was really evolving and going to change security forever. Yeah, whatever; the point was the passion and enthusiasm they brought to Black Hat and BSides.
Each conversation I overheard was focused on one specific area of interest, but the discussions quickly led them into other facets of security they might not know anything about – social engineering, encryption, quantum computing, browser hacking, app sec, learning how languages and processors and subsystems work together … and on and on. Stuff I know nothing about, stuff I will never know about, yet many of the same types of attacks and vulnerabilities apply to each new device. Since most of us here at Securosis are now middle-aged and have kids, it’s fun for me to see how each parent is dealing with the inevitability of their kids growing up with the Internet of Things. Listening to Jamie and Rich spin different visions of the future where their kids are surrounded by millions of processors all trying to alter their reality in some way, and how they want to teach their kids to hack as a way to learn, a way to understand technology, and a way to take control of their environment. I may know less and less, but the community is growing vigorously, and that was a wonderful thing to witness. On to the Summary: Webcasts, Podcasts, Outside Writing, and Conferences Rich on Threatpost – How I Got Here. I got to do my third favorite thing, talk about myself. Dave Mortman on Big Data Security Challenges. Mike’s DR column “Prohibition for 0-day Exploits”. Mike quoted in CRN about the Proofpoint/Armorize deal. Favorite Securosis Posts Rich: The CISO’s Guide to Advanced Attackers. Mike’s latest paper is great. Especially because I keep having people thank me for writing it when he did all the work. And no, I don’t correct them. Adrian Lane: Hygienically Challenged. After 10 weeks of travel, I’m all too familiar with this element of travel. But after 3 days fishing and hiking in the Sierras I was one of these people. Sorry to the passengers on that flight. David Mortman: Research Scratchpad: Stateless Security. Mike Rothman: Lockheed-Martin Trademarks “Cyber Kill Chain”.
“Cyberdouche” Still Available. A post doesn’t have to be long to be on the money, and this one is. I get the need to protect trademarks, but for that right you’ll take head shots. Cyberdouche FTW. Other Securosis Posts “Like” Facebook’s response to Disclosure Fail. Research Scratchpad: Stateless Security. New Paper: The 2014 Endpoint Security Buyer’s Guide. Incite 8/21/2013 — Hygienically Challenged. Two Apple Security Tidbits. Lockheed-Martin Trademarks “Cyber Kill Chain”. “Cyberdouche” Still Available. IBM/Trusteer: Shooting Across the Bow of the EPP Suites. New Paper: The CISO’s Guide to Advanced Attackers. Favorite Outside Posts Adrian Lane: Making Sense of Snowden. Look at my comments in Incite a couple weeks back and then read this. Chris Pepper: Darpa Wants to Save Us From Our Own Dangerous Data. Rich: Facebook’s trillion-edge, Hadoop-based and open source graph processing engine. David Mortman: Looking inside the (Drop) box. Mike Rothman: WRITERS ON WRITING; Easy on the Adverbs, Exclamation Points and Especially Hooptedoodle. Elmore Leonard died this week. This article he wrote for the NYT sums up a lot about writing. Especially this: “If it sounds like writing, I rewrite it.” Research Reports and Presentations The 2014 Endpoint Security Buyer’s Guide. The CISO’s Guide to Advanced Attackers. Defending Cloud Data with Infrastructure Encryption. Network-based Malware Detection 2.0: Assessing Scale, Accuracy and Deployment. Quick Wins with Website Protection Services. Email-based Threat Intelligence: To Catch a Phish. Network-based Threat Intelligence: Searching for the Smoking Gun. Understanding and Selecting a Key Management Solution. Building an Early Warning System. Implementing and Managing Patch and Configuration Management. Top News and Posts Hackers for Hire. 
Bradley Manning Sentenced to 35 Years in Prison. Declassified Documents Prove NSA Is Tapping the Internet. ‘Next Big’ Banking Trojan Spotted In Cybercrime Underground. How the US (probably) spied on European allies’ encrypted faxes. Researcher finds way to commandeer any Facebook account from his mobile phone. Blog Comment of the Week This week’s best comment goes to michael hyatt, in response to Research Scratchpad: Stateless Security. I think we’re working our way in that direction, though not as explicitly as you define it. But while


“Like” Facebook’s response to Disclosure Fail

Every company makes mistakes, especially when it comes to researchers disclosing security bugs and/or vulnerabilities. And when the frustrated researcher goes public and makes a scene, the company has a few choices: 1. Break out the lawyers. 2. Throw mud at the researcher in the press. 3. Own the mistake and try to fix it. Yes, there are other options, but we tend to see #1 and #2 a lot more than we see #3. Which is why I “like” (to use Facebook’s terminology) how they responded to the issue. The researcher in question basically showed how he could post to Zuckerberg’s timeline (yes, the CEO’s). That would usually trigger some lawyerly activity from a company. But this was their response: I’ve reviewed our communication with this researcher, and I understand his frustration. He tried to report the bug responsibly, and we failed in our communication with him. Um, that’s pretty clear. Facebook accepted responsibility. They took their lumps, which is what they should do. They did explain that there wasn’t sufficient detail in the bug report, so it got routed incorrectly. But all the same, they didn’t shy away from their part in the situation. Far too many companies don’t do that. But it gets better, because Joe Sullivan, Facebook’s CSO, committed to a few changes to the program: We will make two changes as a result of this case: (1) We will improve our email messaging to make sure we clearly articulate what we need to validate a bug, and (2) we will update our whitehat page with more information on the best ways to submit a bug report. Now they still won’t pay a bounty, because the vulnerability was proven against a real user (yes, the CEO). But some folks in the security community, led by Marc Maiffret, banded together and raised over $12K for the guy anyway. Win-win. Which rarely happens when you are talking about vulnerability disclosure.


Research Scratchpad: Stateless Security

Here’s another idea I’ve been playing with. As I spend more time playing with various cloud and infrastructure APIs, I’m starting to come around to the idea of Stateless Security. Here’s what I mean: Right now, a reasonable number of our security tools rely on their own internal databases for tracking state. Now for something like IPS this isn’t a problem, but there are a lot of other functions that have to rely on potentially stale data since there are only so many times we can run security checks before pissing off the rest of the infrastructure. Take configuration and vulnerability management — we tend to lack even an accurate idea of our assets and have to scan the heck out of our environment to keep track of things. But as both security tools and infrastructure expose APIs, we can use Software Defined Security to pull data, in real time, from the most canonical source, rather than relying on synchronization or external scanning. Take the example I wrote up in my SecuritySquirrel proof of concept. We pull a real time snapshot of running instances directly from the cloud, then correlate it with a real time feed from our configuration management tool in order to quickly identify any unmanaged servers. I originally looked at building a simple database to track everything, but quickly realized I could handle it more quickly and accurately in memory resident code. Even 100,000 servers could easily be managed like this with the memory in your laptop (well, depending on the responsiveness of the API calls). The more I think about it, the more I can see a lot of other use cases. We could pull data from various security tools and the infrastructure itself, performing real time assessments instead of replicating databases. Now it won’t work everywhere, and maybe not even in the majority of cases, but especially as we add more API enabled infrastructure and applications it seems to open a lot of doors. Using a software defined network? 
Need to know the real-time route to a particular server and correlate it with firewall rules based on a known vulnerability? With stateless security this is potentially a few dozen lines of code (or less) that could trigger automatically anytime a new vulnerability is detected or an advisory released (just add your threat intelligence feed). The core concept is, wherever possible, to pull state in real time from the most canonical source available. I’m curious what other people think about this idea.
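The SecuritySquirrel-style correlation described above can be sketched in a few lines. To be clear, this is a hypothetical sketch, not the actual proof of concept: it assumes the two inputs are lists of instance identifiers pulled live from the cloud provider's API and from the configuration management tool, held only in memory, with no database in between.

```python
# Sketch: stateless correlation of two live feeds, held only in memory.
#   running: instance IDs from the cloud provider's API (canonical source)
#   managed: instance IDs known to the configuration management tool
# Anything running but not managed is an unmanaged server to investigate.

def find_unmanaged(running, managed):
    """Return instance IDs that are running but not under management."""
    return sorted(set(running) - set(managed))

if __name__ == "__main__":
    running = ["i-0a1", "i-0b2", "i-0c3"]  # from the cloud API, in real time
    managed = ["i-0a1", "i-0c3"]           # from the config management feed
    print(find_unmanaged(running, managed))  # ['i-0b2']
```

A set difference over even 100,000 identifiers is trivial for the memory in a laptop, which is the point: the expensive part is the API calls, not the state.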


New Paper: The 2014 Endpoint Security Buyer’s Guide

Our updated and revised 2014 Endpoint Security Buyer’s Guide updates our research on key endpoint management functions, including patch and configuration management and device control. We have also added coverage of anti-malware, mobility, and BYOD. All very timely and relevant topics. The bad news is that securing endpoints hasn’t gotten any easier. Employees still click things, and attackers have gotten better at evading perimeter defenses and obscuring attacks. Humans, alas, remain gullible and flawed. Regardless of any training you provide employees, they continue to click stuff, share information, and fall for simple social engineering attacks. So endpoints remain some of the weakest links in your security defenses. As much as the industry wants to discuss advanced attacks and talk about how sophisticated adversaries have become, the simple truth remains that many successful attacks result from simple operational failures. So yes, you do need to pay attention to advanced malware protection tactics, but if you forget the fundamental operational aspects of managing endpoint hygiene, the end result will be the same. The goal of this guide remains to provide clear buying criteria for those of you looking at endpoint security solutions in the near future. The landing page is in our Research Library. You can also download The 2014 Endpoint Security Buyer’s Guide (PDF) directly. We would like to thank Lumension Security for licensing the content in this paper. Obviously we wouldn’t be able to do the research we do, or offer it to you without cost, without companies supporting our work.


Incite 8/21/2013: Hygienically Challenged

I spend a lot of time in public places. I basically work in coffee shops and spend more than my fair share of time in airports and restaurants. There is nothing worse than being in the groove, banging out a blog post, and then catching a whiff of someone – before I can see them. I start to wonder if the toilet backed up or something died in the wall. Then I look around the coffee shop and notice the only open table is next to you. no. No. NO. Yes, the sticky dude sits right next to you. Now I’m out of my productivity zone and worried about whether the insides of your nostrils are totally burned out. Sometimes I’m tempted to carry some Tiger Balm with me, just to put under my nose when in distress. Yes it would burn like hell, but that’s better than smelling body odor (BO) for the next couple of hours. It’s not just BO. How about those folks that bathe in stinky perfume? Come on Man! The Boy had a tutor once that just dumped old lady perfume on. I wonder if she thought we were strange because we had all the windows in the house open in the middle of winter. Finally the Boss had to tell her the perfume was causing an allergic reaction. Seems we’re all allergic to terrible perfume. I just don’t get it. Do these folks not take a minute to smell their shirt before they emerge from the house? Do they think the smell of some perfumes (like the scent that smells like blood, sweat and spit) is attractive or something? Do they have weak olfactory senses? Do they just not care? I know some cultures embrace natural human smells. But not the culture of Mike. If you stink, you should bathe and wear clean clothes. If you leave a trail of scent for two hours after you leave, you may be wearing too much perfume. There’s got to be a Jeff Foxworthy joke in there somewhere. What should I do? There are no other tables available in the coffee shop. I could throw in the towel and move to a different location. 
I could suggest to the person that they are hygienically challenged and ask them to beat it. I could go all passive-aggressive and tattle to the baristas, and ask them to deal with it. Maybe I’ll get one of those nose clips the kids wear when swimming to keep my nostrils closed. But I’ll do none of the above. What I’ll do is sit there. I won’t be chased away by some smelly dude. I mean, I paid my $2.50 to sit here as long as I want. So I pull the cover off my coffee and take a big whiff of java every 10 seconds or so to chase away the stench. By the way, it’s hard to type when you are inhaling coffee fumes. It’s unlikely I’ll get a lot done, but I have nowhere else to be, so I can just wait it out. Which is stupid. My ridiculous ego won’t accept that body odor is likely covered under the 1st Amendment, so I couldn’t make the guy leave even if I wanted to. I’ll suffer the productivity loss to prove nothing to no one, instead of hitting another of the 10 coffee shops within a 5-mile radius of wherever I am. Thankfully I have legs that work and a car that drives. I can just go somewhere else, and I should. Now when the stinky dude occupies the seat next to you on a 7-hour flight, that’s a different story. There is nowhere to go but 30,000 feet down. In that case, I’ll order a Jack and Coke, even at 10 in the morning. I’ll accidentally spill it. OOPS. You have to figure the waft of JD > BO every day of the week. -Mike Photo credit: “body_odor” originally uploaded by istolethetv Heavy Research We’re back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can get all our content in its unabridged glory. And you can get all our research papers too. Ecosystem Threat Intelligence The Risk of the Extended Enterprise Continuous Security Monitoring Migrating to CSM The Compliance Use Case The Change Control Use Case The Attack Use Case Classification Defining CSM Why. Continuous.
Security. Monitoring? Database Denial of Service Countermeasures Attacks Introduction API Gateways Implementation Key Management Developer Tools Newly Published Papers The CISO’s Guide to Advanced Attackers Defending Cloud Data with Infrastructure Encryption Network-based Malware Detection 2.0: Assessing Scale, Accuracy, and Deployment Quick Wins with Website Protection Services Email-based Threat Intelligence: To Catch a Phish Network-based Threat Intelligence: Searching for the Smoking Gun Incite 4 U Define “Integration”: So Forrester’s Rick Holland took the time machine for a spin advocating for security solution integration and the death of point solutions. Nothing like diving back into the murky waters of the integrated suite vs. best of breed issue. It’s not like a lot has changed. Integration helps reduce complexity, at the alleged cost of innovation since it’s mostly big, lumbering companies that offer integrated solutions. That may be an unfair characterization, but it’s been mostly true. Then he uses an example of FireEye’s partnerships as a means to overcome this point solution issue. Again, not new. The security partner program has been around since Check Point crushed everyone in the firewall market with OPSEC in an effort to act big, even as a start-up. But the real question isn’t whether a vendor has good BD folks that can get contracts signed. It’s whether the solutions are truly integrated. And unless the same company owns the technologies, any integrations are a matter of convenience, not necessity. – MR Movies are real: Yesterday I had an interview with a mainstream reporter about some of the research presented at DEF CON this year. Needless to say, there was the ubiquitous “terrorism” question. It


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
  • Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
  • Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.