
Microsoft Offers Six Figure Bounty for Bugs

From the BlueHat blog, Microsoft's security community outreach:

In short, we are offering cash payouts for the following programs:

  • Mitigation Bypass Bounty – Microsoft will pay up to $100,000 USD for truly novel exploitation techniques against protections built into the latest version of our operating system (Windows 8.1 Preview). Learning about new exploitation techniques earlier helps Microsoft improve security by leaps, instead of one vulnerability at a time. This is an ongoing program and not tied to any event or contest.
  • BlueHat Bonus for Defense – Microsoft will pay up to $50,000 USD for defensive ideas that accompany a qualifying Mitigation Bypass Bounty submission. Doing so highlights our continued support of defense and provides a way for the research community to help protect over a billion computer systems worldwide from vulnerabilities that may not have even been discovered.
  • IE11 Preview Bug Bounty – Microsoft will pay up to $11,000 USD for critical vulnerabilities that affect IE 11 Preview on Windows 8.1 Preview. The entry period for this program will be the first 30 days of the IE 11 Preview period. Learning about critical vulnerabilities in IE as early as possible during the public preview will help Microsoft deliver the most secure version of IE to our customers.

This doesn't guarantee someone won't sell to a government or criminal organization, but $100K is a powerful incentive for those considering putting the public interest first.


Security Analytics with Big Data: Deployment Issues

This is the last post in our Security Analytics with Big Data series. We will end with a discussion of deployment issues and concerns – both those common to any big data deployment and those specific to leveraging SIEM. Please post comments or questions and I will answer in the comments.

Install any big data cluster, or any SIEM solution that leverages big data, and you will notice the documentation focuses on getting up and running quickly and on all the wonderful things you can do with the platform. The issues you really need to consider are left unsaid. You have to go digging for problems – but better to find them now than after you deploy. There are several important items, but the single biggest challenge today is finding talent to help program and manage big data.

Talent, or Lack Thereof

One of the principal benefits of big data clusters is the ability to apply different programmatic interfaces and select different query and data management paradigms. That is how we do complex analytics, and how we get better analyses from the cluster. The problem is that you cannot use it if you cannot code it. The people who manage your SIEM are probably not developers. If you have a Security Operations Center (SOC), odds are many of its staff have some scripting and programming experience, but probably not with big data. Today's programmatic interfaces mean you need programmers – and possibly data architects – who understand how to mine the data.

There is another aspect. When we talk to big data project architects, such as SOC personnel trying to identify attacks in event data, they don't always know what they are looking for. They find valuable information hidden in the data, but this isn't simply the magic of querying a big data cluster – the value comes from talented personnel, including statisticians, writing queries and analyzing the results. After a few dozen – or few hundred – rounds of query and review, they start finding interesting things. People don't use SIEM this way. They want to set a policy quickly and have it enforced. They want alerts on malicious activity with minimal work.

Those of you not using SIEM, who are building a security analytics cluster from scratch, should not even start the project without an architect to help with system design. Working from your project goals, the architect will help you with platform selection and basic system design. Building the system will take some doing as well: you need someone to help manage the cluster, and programmers to build the application logic and data queries. And you will need someone versed in attacker behaviors to know what to look for and to help the programmers stitch things together. Only a limited number of people today are qualified for these roles. As we like to say in development, the quality of the code is directly linked to the quality of the developer. Bad developer, crappy code. Fortunately many big data scientists, architects, and programmers are well educated, but most of them are new to both big data and security. That brilliant intern out of Berkeley is going to make mistakes, so expect some bumps along the way. This is one area where you should consider leveraging the experience of your SIEM vendor and third parties to see your project through.

Policy Development

Big data policy development is hard in the short term because, as mentioned above, you cannot code your own policies without a programmer – and possibly a data architect and a statistician.
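For a sense of what that coding looks like, here is a minimal sketch of one round of the query-and-review loop described above – counting failed logins per source address across the cluster. It assumes Python with the mrjob library for Hadoop streaming, and a hypothetical JSON event layout; it illustrates the workflow, not any vendor's product.

    # Sketch: one round of query-and-review against event data in a
    # Hadoop cluster. The event field names (outcome, src_ip) are a
    # hypothetical layout, not a standard.
    import json
    from mrjob.job import MRJob

    FAIL_THRESHOLD = 500  # a guess the analyst tunes between rounds

    class FailedLoginsBySource(MRJob):

        def mapper(self, _, line):
            try:
                event = json.loads(line)
            except ValueError:
                return  # skip malformed records instead of failing the job
            if event.get("outcome") == "failure":
                yield event.get("src_ip", "unknown"), 1

        def reducer(self, src_ip, counts):
            total = sum(counts)
            if total > FAIL_THRESHOLD:
                yield src_ip, total  # surface only the noisy sources

    if __name__ == "__main__":
        FailedLoginsBySource.run()

Run it locally against a sample extract first (python failed_logins.py events.json), then run the same script against the cluster with "-r hadoop". The point of the sketch is the loop: review the output, adjust the threshold or the fields, and run again – which is exactly the work that requires a programmer.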
SIEM vendors will eventually strap abstraction interfaces onto their products to simplify big data query development, but we are not there yet. Until then you will be more dependent on your SIEM vendor and third-party service providers than before. And your SIEM vendor has yet to build out all the capabilities you want from their big data infrastructure. They will get there, but we are still early in the big data lifecycle. In many cases the 'advancements' in SIEM will be delivery of previously advertised capabilities which now actually work as advertised. In other cases vendors will offer considerably deeper analysis because the queries run against more data. Most vendors have been working in this problem space for a decade and understand the classic technical limitations, but they finally have tools to address those issues, so they are tackling their thorniest problems first. They can buttress existing near-real-time queries with better behavioral profiles, and provide somewhat better risk analysis by looking at more data, of more types.

One more facet of this difficulty merits public discussion. During a radical shift in data management systems, it is foolish to assume a new platform (big data or otherwise) will accept the same queries or produce exactly the same results. As we transition to new data management frameworks and query interfaces, the way we access and locate data changes, so even if we stick to a SQL-like query language and run equivalent queries, we may not get exactly the same results. Vet new and revised queries on the new platform to verify they yield correct information; whether the results are better, worse, or the same, you need to assess their quality (see the sketch at the end of this post).

Data Sharing and Privacy

We have talked about the different integration models. Some customers we spoke with want to leverage existing (non-security) information in their security analytics. Some are looking at creating partial copies of data stored in more traditional data mining systems, on the assumption that low-cost commodity storage makes the incremental cost trivial. Others are looking to derive data from their existing clusters and import that information into Hadoop or their SIEM system. There is no 'right' way to approach this – decide based on what you want to accomplish, whether existing infrastructure provides benefits big data cannot, and any network bandwidth issues with moving information between these systems. If you …
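The query-vetting step mentioned above is mechanical enough to script. Here is a minimal sketch, assuming both the legacy platform and the new cluster can be queried from Python and return comparable rows; the fetch functions are hypothetical placeholders for whatever client libraries your platforms provide.

    # Sketch: compare the 'same' query run on the old and new platforms
    # before trusting the new one. fetch_legacy()/fetch_bigdata() are
    # placeholders, not real client APIs.

    def compare_results(legacy_rows, new_rows, key=lambda r: r):
        legacy = {key(r) for r in legacy_rows}
        new = {key(r) for r in new_rows}
        return {
            "matching": len(legacy & new),
            "only_in_legacy": sorted(legacy - new)[:20],  # samples to review
            "only_in_new": sorted(new - legacy)[:20],
        }

    # Example: failed-login counts per source over the same 24 hours.
    # diff = compare_results(fetch_legacy(QUERY), fetch_bigdata(QUERY),
    #                        key=lambda r: (r["src_ip"], r["fails"]))

Differences are not automatically wrong – the new platform may simply see more data – but each row that appears on only one side needs an explanation before you cut over.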


Network-based Malware Detection 2.0: Deployment Considerations

As we wrap up Network-based Malware Detection 2.0, the areas of most rapid change have been scalability and accuracy. That said, getting the greatest impact on your security posture from NBMD requires a number of critical decisions. You need to determine how the cloud fits into your plans: early NBMD devices evaluated malware within the device (an on-box sandbox), but recent advances and new offerings have moved some or all of the analysis to cloud compute farms. You also need to decide whether to deploy the device inline, in order to block malware before it gets in. Blocking whatever you can may sound like an easy decision, but there are trade-offs to consider – as there always are.

To Cloud or Not to Cloud?

On-box versus in-cloud malware analysis has become one of those religious battlegrounds vendors use to differentiate their offerings. Each company in this space has a 70-slide deck to blow holes in the competition's approach. But we have no use for technology religion, so let's take an objective look at the options. Since the on-box analysis of early devices, many recent offerings have shifted to cloud-based malware analysis. The biggest advantage of local analysis is reduced latency – you don't need to send the file anywhere, so you get a quick verdict. But there are legitimate issues with on-device analysis, starting with scalability. You need to evaluate every file that comes in through every ingress point, unless you can immediately tell it is bad from a file hash match. That requires an analysis capability on every Internet connection to avoid missing something. Depending on your network architecture this may be a serious problem: if you have centralized both ingress and egress at a small number of locations you may be fine, but for distributed networks with many ingress points the on-device approach is likely to be quite expensive.

In the previous post we presented the 2nd Derivative Effect (2DE), whereby customers benefit from the network effect of working with a vendor who analyzes a large quantity of malware across many customers. The 2DE affects the cloud analysis choice in two ways. First, with local analysis, malware determinations need to be sent up to a central distribution point, normalized, de-duped, and then distributed to the rest of the network. That added step extends the window of exposure to the malware. Second, the actual indicators and tests need to be distributed to all on-premise devices so they can take advantage of the latest tests and data. Cloud analysis effectively provides a central repository for all file hashes, indicators, and testing – significantly simplifying data management. We expect cloud-based malware analysis to prevail over time. But your internal analysis may well determine that latency matters more to you than cost, scalability, and management overhead – and we're fine with that. Just make sure you understand the trade-offs before making a decision.
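To make the trade-off concrete, here is a minimal sketch of the triage flow both camps share: hash the inbound file, short-circuit on a known verdict, and only detonate on a cache miss. The verdict cache and sandbox call are hypothetical placeholders, not any vendor's API.

    # Sketch: file triage in an NBMD device. The hash lookup is cheap;
    # everything past the cache miss is where the on-box vs. in-cloud
    # latency and scale trade-off lives.
    import hashlib

    known_verdicts = {}  # sha256 -> "malicious" | "benign" (hypothetical cache)

    def submit_to_sandbox(file_bytes):
        # Placeholder for on-box or cloud detonation. A trivial stand-in
        # so the sketch runs: flag the EICAR test string.
        return "malicious" if b"EICAR" in file_bytes else "benign"

    def triage(file_bytes):
        digest = hashlib.sha256(file_bytes).hexdigest()
        if digest in known_verdicts:
            return known_verdicts[digest]  # seen before: no analysis needed
        verdict = submit_to_sandbox(file_bytes)  # the expensive step
        known_verdicts[digest] = verdict  # share the result forward (2DE)
        return verdict

With on-box analysis, submit_to_sandbox() runs locally and the verdict cache must be synchronized out to every other device; with cloud analysis the cache and the sandbox live in one shared place – exactly the data management simplification described above.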
Inline versus Out-of-Band

The next deployment crossroads is deciding where the NBMD device sits in the network flow. Is it deployed inline, so it can block traffic? Or will it be used as a monitor, inspecting traffic and sending alerts when malware goes past? We see the vast majority of NBMD devices currently deployed out-of-band: delaying the delivery of files during analysis (whether on-box or in the cloud) tends to go over like a lead balloon with employees. They want their files (or apps) now, and they show remarkably little interest in how controlling malware risk may impact their ability to work.

All things being equal, why wouldn't you go inline, for the ability to get rid of malware before it can infect anything? Isn't that the whole point of NBMD? It is, but inline deployment is a high-wire act. Block the wrong file or break a web app and there is hell to pay. If the NBMD device you championed goes down and fails closed – blocking everything – you may as well start working on your resume. That is why most folks deploy NBMD out-of-band for quite some time, until they are comfortable it won't break anything important. But out-of-band deployment has its own downsides, well beyond a limited ability to block attacks before it's too late. The real liability with out-of-band deployment is working through the alerts. Remember: each alert requires someone to do something. The alert must be investigated, and the malware identified quickly enough to contain any damage. Depending on staffing, you may be cleaning up a mess even when the NBMD device correctly flags a file as malware. That has serious ramifications for the NBMD value proposition.

In the long run we don't see much question: NBMD will reside within the perimeter security gateway. That's our term for the single box that encompasses NGFW, NGIPS, web filtering, and other capabilities. We see this consolidation already, and it will not stop. So NBMD will inherently be inline. Architecture goes away as a factor, and you get a pure choice: block certain file types and malware attacks, or merely alert. Deploying the device inline gives you the best of both worlds, and the choice.

The Egress Factor

This series focuses on the detection part of the malware lifecycle, but we need to at least touch on preventative techniques to ensure bad stuff doesn't leave your network even if malware gets in. Remember the Securosis Data Breach Triangle: if you break the egress leg and stop exfiltration, you have stopped the breach. That is simple to say but not to do. Everything is now encapsulated over port 80 or 443, and attackers have new means of exfiltration. We have seen tampering with consumer storage services (Google Drive/Dropbox) to slip files out of a network, as well as exfiltration 140 characters at a time through Twitter. Attackers can be pretty slick. So what to do? Get back to aggressive egress filtering on your perimeter, and block the unknown: if you cannot identify an application in the outbound stream, block it (see the sketch below). This requires NGFW-type application inspection and classification capabilities and a broad application library, but ultimately …
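A minimal sketch of that default-deny egress idea, assuming an upstream classifier has already labeled each outbound flow with an application name – the classification itself is the hard NGFW part, and out of scope here. Application names are illustrative.

    # Sketch: default-deny egress filtering. Flows the classifier cannot
    # identify are blocked outright; identified apps are checked against
    # an explicit allow list.
    ALLOWED_APPS = {"http-browsing", "dns", "corporate-mail"}

    def egress_decision(flow):
        app = flow.get("app")  # None or "unknown" = classifier gave up
        if not app or app == "unknown":
            return "block"     # cannot identify it? block it
        return "allow" if app in ALLOWED_APPS else "block"

    # Tunneled exfiltration over 443 shows up as an unidentifiable flow:
    for f in ({"app": "dns"}, {"app": "unknown"}, {"app": "dropbox"}):
        print(f, "->", egress_decision(f))

The policy itself is two lines; the real investment is the application library that keeps the "unknown" bucket small enough that blocking it doesn't break the business.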


Project Communications

A note on project management: one client was quite disappointed with me for not showing progress as I went along, saying "Fast iteration is better than delayed perfection," while another client was mad at me because "you're trickling again" – showing progress but not a finished product (a/k/a delayed perfection). A gentle smack upside the head: ask clients how they prefer to handle project communications! They know what they want and how they want it, and you'd better RECOGNIZE.

Note from Rich: In my consulting days I always tried to feel out the client and put reporting expectations in the proposal. It makes everyone happier.


API Gateways: Access Provisioning

What do we want? API access! When do we want it? Now!

It's time to change your entire mindset. We're talking about API security, but not for traditional APIs. API gateways are a response to the 'open API' movement, and they create a very different development environment. As we mentioned in our introduction, API gateways are an enabling technology, but likely not in the way you think. Companies want to expose their services to a wide audience, but rather than design and build consumer offerings in-house, they often provide API access to their services to contractors – and in many cases to the general programming community. For companies like Twitter, Facebook, and YouTube (Google), the trend is to let third-party developers extend and integrate their platforms to provide novel user experiences. It's a win/win: the company gets to leverage innovations from the third-party community, users get better apps, and developers get paid (the average force.com developer makes $392k/year for their work). It can be an almost free way to leverage the best ideas in the development world – you just have to accept the risk of random people groping around your services and data.

Leveraging the community for innovation and pro bono development raises some new security problems: how can you control your API while actively making it available outside your company? Not that many years ago companies wrestled with serving up data to consumers outside the firewall – letting outsiders write code to run on top of proprietary systems is downright scary. To provision developer access, and to control what they can use and how, you need some form of API management framework. API gateways are that framework. From the developer's perspective they function like a traditional development environment: they bundle a number of features under one umbrella to provide the basic tools developers need and make API integration as simple as possible. For the API provider, rich and accessible services encourage developers. The flip side is managing developers who don't work for your company, and giving up control over endpoints and user experience. Additionally, you need to control access and features through tokens and keys. 80% of your API gateway effort will focus on what developers need to leverage your service, but the most difficult 20% will be managing the developer experience and exposing services, which requires attention to ease of use and hiding complexity from developers.

API gateways are for extending features to developers, so most of our examples take the development perspective. Our outline follows the path developers take to use your APIs. We will start from ground zero, as developers register to use your service, and consider how you will provision developer access. Per the outline in our introduction, we will then move into development tools, key management, and other critical areas of API security. On our journey we will straddle two realms: buyers and builders. For builders, we will show examples of features you need to build into your platform. For those of you looking to acquire an API gateway, consider this a mirror image of your critical platform criteria, showing where you will need services to get your deployment over the finish line.

Provisioning

As simple as it may seem, provisioning for API gateways is a balancing act. On the one hand companies want simple, streamlined access so developers can build functionality.
On the other hand they want to ensure all this complies with security policies. How can you ensure security while providing developers with full access? What process provides the right mix of policy checkpoints without hampering developers? Therein lies the rub. Let's look at a developer's first step: getting access to the development environment.

Developer Access Provisioning

Perhaps you have heard that developers can be a tad mercurial? Development is about building and enabling, so security controls which restrict usage or limit functions are seen as an impediment and a source of friction. Keeping developers on board with security policy is a challenge, especially when many of them don't even work for your company. Development tools are typically selected for ease of use, so streamlining access to tools and simplifying access to API functionality is critical. API gateways proxy communications to applications – they act as traffic cops, directing application requests according to policy. That middle ground is a vital place for security to focus, for three reasons:

  • It is a boundary between internal and external, making it an ideal place for policy enforcement.
  • It is a logical place to monitor inbound and outbound access.
  • It is where developers get everything they need to create applications.

What do developers need to get started coding? They need to be vetted to the API, which means they need credentials. These credentials come as tokens and possibly certificates. API gateways should provide what developers need to find and bind to your API to begin coding. First, developers generally need to register with the gateway to initiate the key issuance process and get credentials to your API. This may take a few minutes for a simple automated process, or much longer for requests which require manual review. Once accepted, developers receive credentials – often only for a development and testing instance, with production access to follow (a sketch of this flow appears below). How this process works, and how simple it is to implement, are important factors in selecting an API gateway. How well can your candidates be tuned to your organizational needs? When building an API gateway, be realistic about what developers will tolerate in terms of delay and complexity. Grand processes with many steps tend to stop developers in their tracks.

API documentation is another major factor in simplifying developers' lives. The favorite words of many developers are "for example", typically followed by a code snippet and a usage explanation. The goal is to get developers up and running quickly, so look for code samples, reference implementations, and test clients when you evaluate API gateways. A wide variety of languages are in play, so over time you will likely build your own miniature …
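Here is a minimal sketch of that registration-to-credentials flow, assuming a gateway that issues sandbox-scoped keys automatically and gates production access on a manual review. The record layout and function names are illustrative, not any product's API.

    # Sketch: developer self-registration and key issuance for an API
    # gateway. Sandbox credentials are immediate; production scope
    # requires a human approval step.
    import secrets

    registry = {}  # email -> developer record (stand-in for the gateway store)

    def register_developer(email, app_name):
        api_key = secrets.token_urlsafe(32)
        registry[email] = {
            "app": app_name,
            "api_key": api_key,          # in practice, store only a hash
            "scopes": ["sandbox"],       # automatic, takes minutes
            "production_requested": False,
        }
        return api_key                   # shown to the developer once

    def request_production(email):
        registry[email]["production_requested"] = True  # queue for review

    def approve_production(email):       # the manual review step
        registry[email]["scopes"].append("production")

    key = register_developer("dev@example.com", "widget-mashup")
    request_production("dev@example.com")
    approve_production("dev@example.com")
    print(registry["dev@example.com"]["scopes"])  # ['sandbox', 'production']

The shape of the flow – instant sandbox, gated production – is the balance point: friction-free enough that developers actually register, with the policy checkpoint saved for the step that touches real data.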


Friday Summary: June 14, 2013

Are you aware of a theft of big data? Let me ask a slightly different way: do you know of any instance where a commercial big data cluster was exposed to an attacker who mined the cluster for fun or profit? Hackers are unlikely to copy out a big data set – why bother moving terabytes when they can use your cluster to store and process the data in place? I am unaware of any occurrences, public or private. And no, LexisNexis and ChoicePoint, where the attackers had valid user credentials, don't count. Please comment if you know of an example. I ask because I have been reading about how vendors are combatting the billions of dollars of theft in the big data space, but I am unaware of any such big data cyberthefts. In fact I have not heard of one dollar being stolen. Unless you count the NSA collection of vast amounts of personal data as thieving, but I hope we can agree that is different in several ways. So my question stands: who was attacked? Where did the thefts occur? I don't want to deprecate security around big data clusters just because we have not yet seen an attack – we do need cluster security, and I am certain we will eventually see attacks. But hyperbole won't help anyone. Executive management teams have heard this FUD before. In the early days, before CISOs, security cried "Vulnerabilities will eat your grandmother!" one too many times, and management turned their collective backs. This round of FUD will not help IT teams get budget or implement security in and around big data clusters.

Another question: are you aware of any security analytics tools, policies, algorithms, or MapReduce queries that can detect a big data breach? I doubt it. Seriously doubt it. The application of big data and data mining to security is focused on fraud detection and improving SIEM threat detection capabilities. As of this writing no SIEM tool protects big data. No one has written a MapReduce query to find "the bad guys" illegally using a big data cluster. Today that capability does not exist. We have only the most basic monitoring features, from the Database Activity Monitoring vendors, to detect misuse of big data clusters – and they are so limited they are barely worth mentioning. Of course I expect all this to change. We will see attacks on big data, we will see more security tools focused on protecting it, and we will use analytics to detect misuse there as well as everywhere else. When that will change, I cannot say. After the first few big data breaches, perhaps? On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Adrian's DR post: Why Database Assessment.
  • Adrian's white paper on What Every DBA Should Know.

Favorite Securosis Posts

  • Adrian Lane: Getting to Know Your Adversary.
  • Mike Rothman: Security Analytics with Big Data: Integration. All Adrian needs is to mention either BYOD or APT in this blog series to hit the security marketing hyperbole trifecta! Kidding aside, he is doing a good job structuring the discussion of how to leverage big data to solve security problems.

Other Securosis Posts

  • We are all guilty of something.
  • Talking Head Alert: Mike on Phishing Webcast.
  • Incite 6/12/2013: The Wall of Worry.
  • The Securosis Nexus Beta 2 Begins!
  • Network-based Malware Detection 2.0: The Network's Place in the Malware Lifecycle.
  • Security Analytics with Big Data: Integration.
  • DDoS: It's FUD-eriffic!
  • Quick thoughts on the iOS and OS X security updates.
  • Groupthink Kills Your Security Layers.
  • A truism of security information sharing.
  • Getting to Know Your Adversary.
  • Friday Summary: June 7, 2013.

Favorite Outside Posts

  • Rich: Gartner Reveals Top 10 IT Security Myths. Not sure these are the top 10, but it is a good list. Item 3 lacks nuance, however.
  • Adrian Lane: Upcoming revelations speculations. Robert Graham has been on a roll lately. This 'revelations' post is a fun read, throwing scenarios out there and seeing what's plausible, furthering the Snowden leaks story line. The Skype speculation is unsettling – it is entirely plausible yet sounds totally insane to normal people: two common elements of many declassified Cold War stories.
  • Mike Rothman: Sacke Notes: Cofficers – A New Breed in a New Economy. I will probably use this as an Incite topic, but it's a pretty good view into my working lifestyle. In fact I am in my coffee shop office right now, putting this link in. How perfect is that?

Research Reports and Presentations

  • Email-based Threat Intelligence: To Catch a Phish.
  • Network-based Threat Intelligence: Searching for the Smoking Gun.
  • Understanding and Selecting a Key Management Solution.
  • Building an Early Warning System.
  • Implementing and Managing Patch and Configuration Management.
  • Defending Against Denial of Service (DoS) Attacks.
  • Securing Big Data: Security Recommendations for Hadoop and NoSQL Environments.
  • Tokenization vs. Encryption: Options for Compliance.
  • Pragmatic Key Management for Data Encryption.
  • The Endpoint Security Management Buyer's Guide.

Top News and Posts

  • The Secret War. Profile of the man running US 'cyber-war' efforts.
  • Microsoft Disrupts Citadel Botnet.
  • Facebook Unveils Presto. A speedy replacement for Hive.
  • Cyber Security and the Second Amendment.
  • Banker's Nap Costs Millions.
  • Lawsuit filed over NSA phone spying program.
  • Microsoft Security Bulletin Summary for June 2013.
  • Democratic Senator Defends Phone Spying, And Says It's Been Going On For 7 Years.
  • Expert Finds XSS Flaws on Intel, HP, Sony, Fujifilm and Other Websites.

Blog Comment of the Week

This week's best comment goes to -ds, in response to A truism of security information sharing:

Maybe information sharing will be easier now that we know the NSA have it all already.


Risk Management: Proto-Science

Alex Hutton has been on the leading edge of IT security risk management for as long as I have known him. He has a new blog, and if you don't think we can ever quantify risk, you need to read his post, The next age of risk management, science, & craftsmanship:

And that's the crux of the third age, the move to what I've past referred to as a Modern Approach to Risk Management (borrowing heavily from the white page of the same name). Forward thinking programs are blending things like fraud analytics, InfoSec controls, and risk modeling so that there is no longer a boundary between these disciplines. Even folks who are grumpy sticks in the mud about risk, big data and so forth have had to acknowledge the benefits of at least basic "Data Science" methods.

Alex is using some of these techniques in the real world. I have always challenged quantitative risk modelers to show me a model that consistently and reasonably accurately predicts security outcomes. A few people are close, but likely not using any of the models you have been taught. Alex and some others, including Jack Jones, are taking a scientific approach and slowly making progress. I expect that some day during my career a model will pass my risk management test, thanks to their hard work. That will change our profession dramatically.


We are all guilty of something

Moxie Marlinspike has a must-read editorial over at Wired:

For instance, did you know that it is a federal crime to be in possession of a lobster under a certain size? It doesn't matter if you bought it at a grocery store, if someone else gave it to you, if it's dead or alive, if you found it after it died of natural causes, or even if you killed it while acting in self defense. You can go to jail because of a lobster. If the federal government had access to every email you've ever written and every phone call you've ever made, it's almost certain that they could find something you've done which violates a provision in the 27,000 pages of federal statutes or 10,000 administrative regulations. You probably do have something to hide, you just don't know it yet.

I have mostly stayed away from the recent NSA news because it isn't infosec per se. But here's the thing: private businesses are collecting what are essentially our innermost thoughts – search engines, email, writing, what you read online – never mind our physical locations and actions. If someone in a position of power decides to look at you, they will find something. I recently had a friend threatened, very directly, merely for speaking out against something innocuous in a public forum. I support our government and law enforcement, but I also believe in privacy and appropriate checks and balances in the system. The NSA likely hasn't done anything illegal, but the laws themselves are the issue. These are good people doing the job we put before them, but we neglected to have a serious social discussion about the potential consequences first. I will step down off the soapbox now.


Talking Head Alert: Mike on Phishing Webcast

If you have nothing better to do tomorrow at 2pm EDT, and want to learn a bit about what's new in phishing (there is a lot of it, but that's not new) and how to use email-based threat intelligence to deal with it, join me and the folks from Malcovery Security for a webcast. I will cover the content of the Email-based Threat Intelligence paper, and the folks from Malcovery will share some of their research into phishing trends. It should be an interesting event, so don't miss it… You can register now.


Incite 6/12/2013: The Wall of Worry

Anxiety is something we all deal with on a daily basis. It is a feature of the human operating system. Maybe it's that mounting pile of bills, or an upcoming doctor's appointment, or a visit from your in-laws, or a big deadline at work. It could be anything, but anxiety triggers our fight-or-flight mechanisms, causes stress, and over time takes a severe toll on our health and well-being. Culturally I come from a long line of worriers. Neuroses are just something we get used to, because everyone I know has them (including me) – some are just more vocal about it than others.

I think every generation believes it has it tougher than the previous one, but this isn't a new problem. It's the same old story, although things do happen faster now and bad news travels instantaneously. I stumbled across a review of a 1934 book called You Can Master Life, which put everything into context. If you recall, 1934 was a pretty stressful time in the US. There was this little thing called the Great Depression, and it screwed some folks up. I recently learned my great-grandfather lost the bank he owned at the time, so I can only imagine the strain he was under. The book presents a worry table, which distinguishes between justified and unjustified worries and then systematically reasons why you don't need to worry about most things. For instance, it seems one fellow spent 40% of his worrying on disasters that never happened, and another 30% on past actions he couldn't change. Right there, 70% of his worry had no basis in reality. When he was done he had figured out how to eliminate 92% of his unjustified fears. So what's the secret to defeating anxiety?

What, of this man, is the first step in the conquest of anxiety? It is to limit his worrying to the few perils in his fifth group. This simple act will eliminate 92% of his fears. Or, to figure the matter differently, it will leave him free from worry 92% of the time.

Of course that assumes you have rational control over what you worry about. And who can really do that? What works best for me is to look at worry in terms of control. If I control something, I can and should worry about it. If I don't, I shouldn't. Is NSA surveillance (which Adrian and I discuss below) concerning? Yes. Can I really do anything about it – beyond stamping my feet and blasting the echo chamber with all sorts of negativity? Nope. I only control my own efforts and integrity. Worrying about what other folks do, or don't do, doesn't help my situation. It just makes me cranky.

They say Wall Street climbs a wall of worry, and that's fine. If you spend your time climbing a similar wall of worry you may achieve things, but at great cost – not just to you but to those around you. Take it from me – I know all about it. To be clear, this is fine tuning. I would never minimize the severity of a medical anxiety disorder. Unfortunately I have some experience with that as well, and folks who cannot control their anxiety need professional help. My point is that for those of us who just seem to find things to worry about, a slightly different attitude, and a focus on things you can control, can do wonders to relieve some of that anxiety and make your day a bit better.

–Mike

Photo credit: "Stop worrying about pleasing others so much, and do more of what makes you happy." originally uploaded by Live Life Happy

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway.
Remember you can get our Heavy Feed via RSS, with all our content in its unabridged glory. And you can get all our research papers too.

API Gateways

  • Security Enabling Innovation

Security Analytics with Big Data

  • Integration
  • New Events and New Approaches
  • Use Cases
  • Introduction

Network-based Malware Detection 2.0

  • The Network's Place in the Malware Lifecycle
  • Scaling NBMD
  • Evolving NBMD

Advanced Attackers

  • Take No Prisoners

Quick Wins with Website Protection Services

  • Deployment and Ongoing Management
  • Protecting the Website
  • Are Websites Still the Path of Least Resistance?

Newly Published Papers

  • Email-based Threat Intelligence: To Catch a Phish
  • Network-based Threat Intelligence: Searching for the Smoking Gun
  • Understanding and Selecting a Key Management Solution
  • Building an Early Warning System
  • Implementing and Managing Patch and Configuration Management

Incite 4 U

  • Snowing the NSA: Once again security (and/or monitoring) is front and center in the media this week. This time it's the leak that the NSA has been monitoring social media and webmail traffic for years – perhaps under the auspices of a secret court, and perhaps not. I believe Rob Graham's assessment that the vast majority of intelligence personnel bend over backwards to protect citizens' rights, but it is still shocking to grasp the depth of our surveillance state. Still, as I mentioned above, I try not to worry about things I can't control. So how did Edward Snowden pull off the leak? The NY Times has a great article about the gyrations reporters went through over a six-month period to get the story. A Rubik's Cube? Really? Snowden came clean, but they would have found him eventually – we always leave a trail. Another interesting link is how someone social engineered the hotel where Snowden was staying to get his room number and learn that he had already checked out. If you want to be anonymous, probably better not to use your real name, eh? – MR
  • Present Tense: As someone who has been blogging about privacy for almost a decade, I am surprised by how vigorous the public reaction has been to spying on US citizens via the telecom carriers. When Congress and the Senate granted immunity to the telecoms for spying on users back in 2008, was it not obvious that corporate entities are now the third-party data harvesters, and government …


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.