Akamai Implements WAF

Akamai announced that they are adding Web Application Firewall (WAF) capabilities to their distributed EdgePlatform network. I usually quote from the articles I reference, but there is simply too much posturing and fluffy marketing-ese about value propositions for me to extract an insightful fragment of information on what they are doing and why it is important, so I will paraphrase. In a nutshell, they have ported ModSecurity onto the Akamai Edge Server and are using the Core Rule Set as the basis of their policy set. As content is pulled from the Akamai cache servers, each request is examined for XSS, SQL injection, response splitting, and other injection attacks, as well as some error conditions indicative of tampering (a toy sketch of this kind of request inspection follows below).

Do I think this is a huge advancement to security? Not really, at least not at the outset. But I think it’s a good idea in the long run. Akamai edge servers are widely used by large commercial vendors and content providers, who are principal targets for many specific XSS attacks. In essence you are distributing Web Application Firewall rules, and enforcing them as requests are made for the distributed/cached content. The ModSecurity policy set has been around for a long time and will provide basic protections, but it leaves quite a gap in meaningful coverage. Don’t get me wrong: the rule set covers many of the common attacks and is proven effective. However, the value of a WAF is in the quality of the rule set, and how appropriate those rules are to the specific web application. Rule sets are really hard to get right, and must be updated with the same frequency as your web site content. As you add new pages or functions, you are adding and updating rules.

I think the announcement is important, though, because it marks the beginning of a trend. We hear far too many complaints about WAF hindering applications, as well as the expense of rule set development and maintenance. The capability is valuable, but the coverage needs to get better, management needs to be easier, and the costs need to come down. I believe this is a model we will see more of because:

  • Security is embedded into the service. With many ‘Cloud’ and SaaS offerings being pitched, most with nebulous benefits, it’s clear that those who use Akamai are covered against the basic attacks, and the analysis is done on the Akamai network, so your servers remain largely unburdened. Just as with outsourcing the processing overhead of anti-spam into the cloud, you are letting the cloud absorb the overhead of SQL injection detection. And like antivirus, it’s only going to catch a subset of the attacks.
  • Commoditization of WAF service. Let’s face it, SaaS and cloud models are more efficient because you commoditize a resource and then leverage the capability across a much larger number of customers. WAF rules are hard to set up, so if I can leverage attack knowledge across hundreds or thousands of sites, the cost goes down. We are not quite there yet, but the possibility of relieving your organization from needing these skills in-house is very attractive for the SME segment. The SME segment is not really using Akamai EdgeServers, so what I am talking about is generic WAF in the cloud, but the model fits really well with outsourced and managed service models. Specific, tailored WAF rules will be the add-on service for those who choose not to build defenses into the web application or maintain their own WAF.
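To make the idea concrete, here is a minimal, hypothetical sketch in Python of the kind of pattern matching a ModSecurity-style rule performs against an incoming request. The rule names and regular expressions are my own illustrations, nowhere near as thorough (or as carefully tuned) as the real Core Rule Set.

```python
import re
from urllib.parse import unquote_plus

# Illustrative signatures only -- loosely in the spirit of Core Rule Set checks.
# Real CRS rules are far more extensive and tuned to reduce false positives.
RULES = {
    "sql_injection": re.compile(r"(\bunion\b.+\bselect\b|\bor\b\s+1\s*=\s*1|--)", re.I),
    "xss": re.compile(r"(<script\b|javascript:|onerror\s*=)", re.I),
    "response_splitting": re.compile(r"[\r\n]+(set-cookie|location)\s*:", re.I),
}

def inspect_request(query_string: str) -> list:
    """Return the names of any rules the (URL-decoded) query string trips."""
    decoded = unquote_plus(query_string)
    return [name for name, pattern in RULES.items() if pattern.search(decoded)]

if __name__ == "__main__":
    print(inspect_request("id=1%27%20OR%201=1--"))                 # ['sql_injection']
    print(inspect_request("q=%3Cscript%3Ealert(1)%3C/script%3E"))  # ['xss']
    print(inspect_request("page=about"))                           # []
```

Even a toy version like this hints at why rule maintenance is the hard part: every new page, parameter, or feature is another chance for a legitimate request to trip a generic pattern, which is exactly the tuning burden a shared service hopes to absorb.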
The knowledge that Akamai can gather and return to WAF & web security vendors provides invaluable analysis on emerging attacks. The statistics, trend data, and metrics they have access to offer security researchers a wealth of information – which can be leveraged to thwart specific attacks and augment firewall rules. So this first baby step is not all that exciting, but I think it’s a logical progression for WAF service in the cloud, and one we will see a lot more of.


MacBook Holiday Sales Report

This is my MacBook sale progress report. For those of you who have not followed my tweets on the subject, I listed my MacBook for sale on Craigslist. After Bruce Schneier’s eye-opening and yet somehow humorous report on selling his laptop on eBay, I figured I would shoot for a face-to-face sale. I chose Craigslist in Phoenix and specified a cash-only sale. The results have been less than impressive.

The first time I listed the laptop:

  • Scammers: 6
  • Phishers: 2
  • Tire kickers: 1
  • Real buyers: 0

The second time I listed the laptop:

  • Scammers: 5
  • Phishers: 4
  • Pranksters: 1
  • Tire kickers: 1
  • Real buyers: 0

I consider them scammers because, in all but one case, the people who responded wanted shipment to Africa. It was remarkably consistent. The remaining ‘buyer’ claimed to be in San Jose, but felt compelled to share some sob story about a relative with failing health in Africa. I figured that was a precursor to asking me to ship overseas. When I said I would be happy to deliver to their doorstep for cash, they never responded. The prankster wanted me to meet him in a very public place and assured me he would bring cash, but was just trying to get me to drive 30 miles away. I asked a half dozen times for a phone call to confirm, which stopped communications cold. I figure this is kind of like crank calling for the 21st century.

A few years ago I saw a presentation by eBay’s CISO, Dave Cullinane. He stated that on any given day, 10% of eBay users would take advantage of another eBay user if the opportunity presented itself, and about 2% were actively engaged in finding ways to defraud other eBay members. Given the vast number of global users eBay has, I think that is a pretty good sample size, and probably an accurate representation of human behavior. I would bet that when it comes to high-dollar items that can be quickly exchanged for cash, the percentage of incidents rises dramatically. In my results, 11 of the 20 responses – 55% – were active scams. I would love to know what percentages eBay sees with laptop sales. Is it the malicious 2% screwing around with over 50% of the laptop sales? I am making an assumption that it’s a small group of people engaged in this behavior, given the consistency of the pitches, and that my numbers on Craigslist are not that dissimilar from eBay’s. A small group of people can totally screw up an entire market, as the people I speak with are now donating stuff for the tax write-off rather than deal with the detritus. Granted, it is easier for an individual to screen for fraudsters with Craigslist, but eBay seems to do a pretty good job. Regardless, at some point the hassle simply outweighs the couple hundred bucks you’d get from the sale. Safe shopping and happy holidays!


Friday Summary – December 11, 2009

I have had friends and family in town over the last eight days. Some of them wanted the ‘Arizona Experience’, so we did the usual: Sedona, Pinnacle Peak Steak House, Cave Creek, a Cardinals game, and a few other local attractions. Part of the tour was the big Crossroads Gun Show out at the fairgrounds. It was the first time I had been to such a show in 9 or 10 years. Speaking with merchants, listening to their sales pitches, and overhearing discussions around the fairgrounds, everything was centered on security. Personal security. Family security. Home security. Security when they travel. They talk about preparedness and they are planning for many possibilities: everything from burglars to Armageddon. Some events they plan for have small statistical probability, while others border on the fantastic. Still, the attendees were there to do more than just speculate and engage in idle talk – they train, plan, meet with peers, and prepare for the threats they perceive.

I don’t want this to devolve into a whole gun control discussion, and I am not labeling any group – that is not my point. What you view as a threat, and to what lengths you are willing to go, provides an illuminating contrast between data security and physical security. Each discussion I engaged in had a very personal aspect to it. I don’t know any data security professionals who honestly sit up at night thinking about how to prepare for new threats or what might happen. For them, it’s a job. Some research late into the night and hack to learn, but it’s not the same thing. Among data security professionals, short of a handful of people in capture-the-flag tournaments at Black Hat, the same level of dedication is not there. Then again, generally no one dies if your firewall fails.

For each of the dozen or so individuals I spoke with, their actions were an odd blend of intellect and paranoia. How much planning they did was a product of their imagination and resources. Are they any more secure than other segments of the population? Do their cars get stolen any less, or are their homes any safer? I have no idea. But on one level I admired them for their sharing of knowledge amongst peers – for thinking about how they might be vulnerable, planning how to address the vulnerabilities, and training for a response. On the other hand, I just could not get out of my head that the risk model is out of whack. The ultimate risk may be greater, but you just cannot throw probability out the window. Perhaps with personal safety it is easier to get excited about security, as opposed to the more abstract concepts of personal privacy or security of electronic funds. Regardless, the experience was eye opening.

On a totally different subject, we notice we have been getting some great comments from readers lately. We really appreciate this! The comments are diverse and enlightening, and often contribute just as much to the community as the original posts. We make a point of listing those who contribute to white paper development and highlighting interesting comments from week to week, but we have been looking for a more concrete way of acknowledging these external contributors for a while now. To show our appreciation, Rich, I, and the rest of the Securosis team have decided to give a $25 donation to Hackers for Charity (HFC) in the name of whoever drops the best comment each week. Make sure you check out the “Blog Comment of the Week”!

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Chris explains What is Google Voice? over at Macworld.
  • David Mortman on Data Not Assertions over at the New School.
  • Rich was part of the Black Hat Virtual Event.
  • Rich was quoted on Bit.ly in The Tech Herald.
  • Rich on the Network Security Podcast.
  • Adrian in Information Security Magazine’s December issue on Basic Database Security.
  • While not directly Securosis related, the RSA Security Blogger’s Meetup is on.

Favorite Securosis Posts

  • Rich: David Mortman’s Changing the Game? post is now up to 37 comments. I’m voting for the entire thread, not just the original post.
  • Adrian: Meier’s DNS Resolvers and You post.
  • Mort: Rich’s post on Possibility is not Probability.
  • Meier: In Violent Agreement.

Other Securosis Posts

  • Verizon 2009 DBIR Supplement
  • Security Controls vs. Outcomes
  • Class Action Against Express Scripts Dismissed
  • Project Quant for Databases: Project Quant: Database Security Planning, Part 2 (part 4 overall); Project Quant: Database Security Planning (part 3 overall)

Favorite Outside Posts

  • Rich: This isn’t my “favorite” post, but it’s probably the single most important thing you need to read on the Internet this week. Eric Schmidt, Google’s CEO, says you only need to worry about privacy if you’re doing something bad. I guess when they say, “Do no evil” they’re talking to us… with an “or else!” at the end.
  • Adrian: Spire Security: Should we change passwords every 90 days?
  • Chris: WPA Cracker: $17 or $34 to check a sniffed WPA(2) password against Moxie’s list. It’s a steal!

Top News and Posts

  • Hackers in the cloud! And not the ones on planes.
  • Facebook Changes Privacy UI (and maybe reduces privacy).
  • The Totally Awesome Frequent Flier/US Mint Loophole. Put this in the category of “things I wish I had thought of”.
  • Ending the PCI Blame Game.
  • Mike Bailey puts XSS into perspective.
  • Amrit’s totally snarky (yet amusing) holiday gift guide.

Blog Comment of the Week

We are going to do something a little different this week… both because we had so many excellent comments, and because we are launching the Hackers for Charity contributions. This week we have three winners! Chris Hayes, in response to Mortman asking for a FAIR analysis in comments on Changing The Game?:

@Mortman. Interesting request. A FAIR analysis can be used to demonstrate variance in resistance strength (formerly referred to as “control strength”). A FAIR analysis is usually done for a unique scenario. For example, password frequency change for an Internet facing app – where access to a small amount of confidential information is possible. A system password policy that requires complexity,


Verizon 2009 DBIR Supplement

Today Verizon released their Supplement to the 2009 Data Breach Investigations Report. As with previous reports, it is extremely well written, densely loaded with data, and an absolute must read. The bulk of the report gives significantly more information on the breakdown of attacks, by both how often each attack occurred and how many records were lost as a result. While that is fascinating, things got most interesting in the appendix, which compares the Verizon data set from 2004 through 2008 to the DataLossDB archives from 2000-2009. One of the big outstanding questions from past Verizon reports was how biased the Verizon dataset is, and thus how well it reflects the world at large. While there was some overlap with the DataLossDB, their dataset is significantly larger (2,300+ events). Verizon discovered a fairly high level of correlation between the two data sets (page 25, Table 4). This is huge, because it allows us to start extrapolating about the world at large and what attacks might look like to other organizations. The great thing about having so much data is that we can now start to prioritize how we implement controls and processes. Case in point: Table 5 on page 26. We once again see that the vast, vast majority (over 70%!) of incidents come from outsiders, which tells us where protection should be focused first. If you go back to the body of the supplement and start looking at the details, you can re-evaluate your current program and re-prioritize appropriately.


Security Controls vs. Outcomes

One of the more difficult aspects of medical research is correlating treatments/actions with outcomes. This is a core principle of science-based medicine (if you’ve never worked in the medical field, you might be shocked at the lack of science at the practitioner level). When performing medical studies the results aren’t always clear cut. There are practical and ethical limits to how certain studies can be performed, and organisms like people are so complex, living in an uncontrolled environment, that results are rarely cut and dried. Three categories of studies are:

  • Pre-clinical/biological: lab research on cells, animals, or other subsystems to test the basic science. For example, exposing a single cell to a drug to assess the response.
  • Experimental/clinical: a broad classification for studies where treatments are tested on patients with control groups, specific monitoring criteria, and attempts to control and monitor for environmental effects. The classic double-blind study is an example.
  • Observational studies: observing, without testing specific treatments. For example, observational studies show that autism rates have not increased over time, by measuring autism rates of different age groups using a single diagnostic criterion. With rates holding steady at 1% for all living age groups, the conclusion is that while there is a perception of increasing autism, at most it’s an increase in diagnosis rates, likely due to greater awareness and testing for autism.

No single class of study is typically definitive, so much of medicine is based on correlating multiple studies to draw conclusions. A drug that works in the lab might not work in a clinical study, and one showing positive results in a clinical study might fail to show the desired long-term outcomes. For example, the press was recently full of stories that the latest research shows little to no improvement in long-term patient outcomes from routine mammograms before age 50 for patients without risk factors. When studies focus on the effectiveness of mammograms at detecting early tumors, they show positive results, but those results do not correlate with improvements in long-term patient outcomes. Touchy stuff, but there are many studies all over medicine and other areas of science where positive research results don’t necessarily correlate with positive outcomes.

We face the same situation with security, and the recent debate over password rotation highlights it (see a post here at Securosis, Russell Thomas’s more detailed analysis, and Pete Lindstrom’s take). Read through the comments and you will see that we have good tools to measure how easy or hard it is to crack a password – based on how it was encrypted/hashed, its length, use of dictionary words, and so on – but none of those necessarily predict or correlate with outcomes. None of that research answers the question, “How often does 90-day password rotation prevent an incident, or in what percentage of incidents did lack of password rotation lead to exploitation?” Technically, even those questions don’t relate to outcomes, since we aren’t assessing the damage associated with the exploitation (due to the lack of password rotation), which is what we’d all really like to know. When evaluating security, I think wherever possible we should focus on correlating, to the best of our ability, security controls with outcomes. Studies like the Verizon Data Breach Report are starting to improve our ability to draw these conclusions and make more informed risk assessments.
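As a purely illustrative sketch of what “correlating controls with outcomes” could look like in practice, here is a toy comparison of incident rates with and without a control in place. The counts are invented for the example, not drawn from the Verizon data or any other real source.

```python
# Hypothetical counts, invented for illustration -- not real breach data.
# Format: (control name, control in place?, organizations observed, organizations with an incident)
observations = [
    ("90-day password rotation", True,  400, 48),
    ("90-day password rotation", False, 600, 75),
]

with_control    = next(o for o in observations if o[1])
without_control = next(o for o in observations if not o[1])

rate_with    = with_control[3] / with_control[2]
rate_without = without_control[3] / without_control[2]

# Relative risk below 1.0 would suggest the control correlates with fewer
# incidents; it still says nothing about the damage per incident, which is
# the outcome we actually care about.
relative_risk = rate_with / rate_without

print(f"Incident rate with control:    {rate_with:.3f}")
print(f"Incident rate without control: {rate_without:.3f}")
print(f"Relative risk:                 {relative_risk:.2f}")
```

Even with clean data like this, the analysis stops short of true outcomes unless you also capture the loss associated with each incident, which is exactly the gap described above.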
This isn’t one of those “you’re doing it wrong” posts. I believe that we have generally lacked the right data to take this approach, but that’s quickly changing, and we should take full advantage of the opportunity.


DNS Resolvers and You

As you are already well aware (if not, see the announcement – we’ll wait), Google is now offering a free DNS resolver service. Before we get into the players, though, let’s first understand the reasons to use one of these free services. You’re obviously reading this blog post, and to get here your computer or upstream DNS cache resolved securosis.com to 209.240.81.67 – as long as that works, what’s the big deal? Why change anything?

Most of you are probably reading this on a computer that dynamically obtains its IP address from the network you’re plugged into. It could be at work, home, or a Starbucks filled with entirely too much Christmas junk. Aside from assigning your own network address, whatever router you are connecting to also tells you where to look up addresses, so you can convert securosis.com to the actual IP address of the server. You never have to configure your DNS resolver; you can rely on whatever the upstream router (or other DHCP server) tells you to use. For the most part this is fine, but there’s nothing that says the DNS resolver has to be accurate, and if it’s hacked it could be malicious. It might also be slow, unreliable, or vulnerable to certain kinds of attacks. Some resolvers actively mess with your traffic, such as ISPs that return a search page filled with advertisements whenever you type in a bad address, instead of the expected error. If you’re on the road, your DNS resolver is normally assigned by whatever network you’re plugged into. At home, it’s your home router, which gets its upstream resolver from your ISP. At work, it’s… work. Work networks are generally safe, but aside from the reliability issues, we know that home ISPs and public networks are prime targets for DNS attacks. Thus there are security, reliability, performance, and even privacy advantages to using a trustworthy service. Each of the more notable free providers cites its own advantages, along the lines of:

  • Cache/speed – A large cache should equate to a fast lookup. Since DNS is hierarchical in nature, if the immediate cache you’re asking to resolve a name already has the record you want, there is less wait to get the answer back. Maintaining the relevance and accuracy of this cache is part of what separates a good, fast DNS service from, say, the not-very-well-maintained-DNS-service-from-your-ISP. Believe it or not, depending on your ISP, a faster resolver might noticeably speed up your web browsing.
  • Anycast/efficiency – This gets down into the network architecture weeds, but at a high level it means that when I am in Minnesota, traffic I send to a certain special IP address may end up at a server in Chicago, while traffic from Oregon to that same address may go to a server in California instead. Anycast is often used in DNS to provide faster lookups based on geolocation, user density, or any other metrics the network engineers choose, to improve speed and efficiency.
  • Security – DNS is susceptible to many different attacks, and is a common attack vector for things like creating a denial of service on a domain name, or poisoning DNS results so users of a service (domain name) are redirected to a malicious site instead. There are many attacks, but the point is that if a vendor focuses on DNS as a service, they have probably invested more time and effort into protecting it than an ISP who regards DNS as simply a minor cost of doing business.

These are just a few reasons you might want to switch to a dedicated DNS resolver.
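If you want to kick the tires on a candidate resolver before switching, the rough sketch below (using the third-party dnspython package; the resolver IP and test hostnames are placeholders I picked for illustration) times a normal lookup and then checks whether a made-up hostname comes back as a proper NXDOMAIN or gets silently redirected – which is relevant to the OpenDNS discussion that follows.

```python
import time
import dns.resolver  # third-party package: pip install dnspython

# Placeholder resolvers to compare -- substitute whatever you want to test.
RESOLVERS = {
    "system default": None,       # whatever DHCP / resolv.conf handed you
    "candidate":      "8.8.8.8",  # example public resolver IP
}

def check(name, server):
    resolver = dns.resolver.Resolver()
    if server:
        resolver.nameservers = [server]
    start = time.perf_counter()
    try:
        answer = resolver.resolve(name, "A")
        elapsed_ms = (time.perf_counter() - start) * 1000
        print(f"  {name}: {[r.to_text() for r in answer]} in {elapsed_ms:.1f} ms")
    except dns.resolver.NXDOMAIN:
        print(f"  {name}: clean NXDOMAIN (no search-page redirection)")

for label, server in RESOLVERS.items():
    print(f"--- {label} ---")
    check("securosis.com", server)                                # speed check
    check("this-domain-should-not-exist-0192837465.com", server)  # redirection check
```

A resolver that answers the second query with an A record instead of NXDOMAIN is redirecting failed lookups, which is exactly the behavior that trips up non-browser applications.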
While there are a bunch of them out there, here are three major services, each offering something slightly different:

  • OpenDNS: One of the most full-featured DNS resolution services, OpenDNS offers multiple plans to suit your needs – basic is free. The thing that sets OpenDNS apart from the others is their dashboard, from which you can change how the service responds to your networks. This adds flexibility, with the ability to enable and disable features such as content filtering, phishing/botnet/malware protection, reporting, logging, and personalized shortcuts. This enables DNS to serve as a security feature, as the resolver can redirect you someplace safe if you enter the wrong address; you can also filter content in different categories. The one thing OpenDNS often gets a bad rap for, however, is DNS redirection on non-existent domains. Like many ISPs, OpenDNS treats every failed lookup as an opportunity to redirect you to a search page with advertisements. Since many other applications (Twitter clients, Skype, VPNs, online games, etc.) use DNS, if you are using OpenDNS with the standard configuration you could potentially leak login credentials to the network, because a bad request will not get back a standard NXDOMAIN response. Your confused client software sees the response as a successful NOERROR and proceeds – possibly sending authentication credentials to OpenDNS – rather than aborting as it would if it got back the ‘proper’ NXDOMAIN. You can disable this behavior, but doing so forfeits some of the advertised features that rely on it. OpenDNS is a great option for home users who want all the free security protection they can get, as well as for organizations interested in outsourcing DNS security and gaining a level of control and insight that might otherwise be available only through on-site hardware. Until your kid figures out how to set up their own DNS, you can use it to keep them from visiting porn sites. Not that your kid would ever do that.
  • DNSResolvers: A simple, no-frills DNS resolution service. All they do is resolve addresses – no filtering, redirection, or other games. This straight-up resolution service also won’t filter for security (phishing/botnet/malware). DNSResolvers is a great, fast service for people who want well-maintained resolvers and are handling security themselves. It effectively serves as an ad demonstrating the competence and usefulness of its parent company easyDNS, by providing a great free DNS service, which encourages some users


Possibility is not Probability

On Friday I asked a simple question over Twitter and then let myself get dragged into a rat-hole of a debate that had people pulling out popcorn and checking the latest odds in Vegas. (Not the odds on who would win – that was clear – but rather on the potential for real bloodshed.) While the debate strayed from my original question, it highlighted a major problem we often have in the security industry (and probably the rest of life, but I’m not qualified to talk about that). A common logical fallacy is to assume that a possibility is a probability – that because something can happen, it will happen. It’s as if we forget that the likelihood something will happen (under the circumstances in question) is essential to the risk equation, be it quantitative, qualitative, or whatever.

Throughout the security industry we continually burn our intellectual capital by emphasizing low-probability events. “Mac malware might happen, so all Mac users should buy antivirus or they’re smug and complacent.” This ignores the fact that the odds of an average Mac user being infected by any type of malware are so low as to be unmeasurable, and lower than the odds of their system breaking due to problems with AV software. Sure, it might change. It will probably change, but we can’t predict that with any certainty, and until then our response should match the actual (current) risk. Bluetooth attacks are another example. Possible? Sure. Probable? Not unless you’re at a security or hacker conference.

There are times, especially during scenario planning, to assume that anything that can happen will happen. But when designing your actual security you can’t treat all threats as equal. Possible isn’t probable. The mere possibility of something is rarely a good reason to make a security investment.
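One way to make the distinction concrete is a back-of-the-envelope annualized loss expectancy comparison. The figures below are invented purely to show the shape of the argument; they are not real threat data.

```python
# ALE = annualized rate of occurrence (ARO) x single loss expectancy (SLE).
# All numbers are made up for illustration only.
threats = {
    "widespread Mac malware infection": {"aro": 0.001, "sle": 5_000},  # possible, rarely observed
    "laptop lost or stolen":            {"aro": 0.05,  "sle": 3_000},  # mundane but far more likely
}

for name, t in threats.items():
    ale = t["aro"] * t["sle"]
    print(f"{name:34s} expected loss ~ ${ale:,.2f}/year")
```

The exact numbers are always debatable, which is the usual objection to quantitative models, but even rough ranges keep the “it could happen” argument from swallowing the security budget.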


In Violent Agreement

My Friday post generated some great discussion in the comments. I encourage you to go back and read through them. Rocky in particular wrote an extended comment that should be a blog post in itself, and which reveals that he and I are, in fact, in violent agreement on the issues. Case in point, his first paragraph:

I think we’re on the same page. As an industry we need to communicate more clearly. It wasn’t my intent to fault any information professionals as much as I’m hoping that we all will push a bit harder for the right conversations in the future. We can’t just let the business make poor decisions anymore, we need to learn their language and engage them in more meaningful dialogue. We’re yelling in the wrong language. We just need to put that effort into learning their language and communicating more effectively. How is it that we can read HEX in real time but can’t converse with a MBA at any time?

Read the last sentence again. It is that important. This is something I’ve been fighting for for a long time. It’s not about bits and bytes, and until we get that through our heads the rest just doesn’t matter, because no one in command will listen to us. Rocky closed out his comment with this thought:

What would IT security look like if we spent as much time on those thoughts as we do on compliance tools, dashboards and monitoring? I think it’d be much more business centric and hopefully significantly more respected in the C-suite.

What do you think?


Class Action Against Express Scripts Dismissed

Jaikumar Vijayam has posted an article at ComputerWorld regarding the Express Scripts data breach class action suit. This is the case where, in 2008, Express Scripts received a letter demanding money from the company under the threat of exposing records of millions of patients. The letter included personal information on people covered by Express Scripts, including birth dates, Social Security numbers, and prescription information. Many of the insured were seeking damages, and the judge has thrown the case out, citing lack of evidence. Without any actual harm being done, there can be no damages sought. To me, this means that privacy is worthless.

“Abstract injury is not enough to demonstrate injury-in-fact,” Judge Buckles wrote. “The injury or threat of injury must be concrete and particularized, actual and imminent; not conjectural or hypothetical.” And: “Plaintiff alleges that he would be injured ‘if’ his personal information was compromised, and ‘if’ such information was obtained by an unauthorized third party, and ‘if’ his identity was stolen as a result, and ‘if’ the use of his stolen identity caused him harm.” These multiple “ifs” put his claims in the realm of the hypothetical, Judge Buckles noted.

I get the argument. And I get that laws don’t protect our feelings. But Express Scripts has been entrusted with the data, and they earn revenue from having it, which means they inherit the custodial responsibility for its security and privacy. Not being able to quantify damages should not be considered the same as not being damaged. Should the burden of proof on this point fall on the person who had their information stolen? Considering that credit card processors, health insurers, third-party service providers, and law enforcement do not share information about breach specifics, it will be nigh on impossible for average citizens to gather the information necessary to demonstrate the chain of events that led to damages. Damages and costs come in many forms, most of which are not fully quantifiable, so it becomes a quagmire. This sets a bad precedent, IMO, and does not promote or incentivize companies to secure data. When it gets bad enough, consumers will push for legislation to curb the behavior, and we have seen how that works out.


Changing The Game?

Rocky DeStefano had a great post today on FudSec, Liberate Yourself: Change The Game To Suit Your Needs, which you should read if you haven’t already. It nicely highlights many of the issues going on in the industry today. However, I just can’t agree with all of his assertions. In particular, two statements really bothered me:

Information Security Leadership. We need to start pushing back at all levels here. It’s my opinion that business’s need to care much less about being compliant and more about being fundamentally secure – or if you prefer having better visibility into real risk. Risk to the mission, risk to the business not the risk to an asset. We continue to create irrelevant measurements – irrelevant because they are point in time, against a less-than secure model and on a playing field that is skewed towards the success of our adversary.

In a perfect, security- and risk-oriented world, I would agree with this 100%. The problem is that, from the business perspective, what they have in place is usually sufficient to do what they need to do safely. I’m a big fan of using risk, because it’s the language the business uses, but this isn’t really a compliance vs. security vs. risk issue. What needs to be communicated more effectively is what compliance to the letter of the law does and doesn’t get you. Where we have failed as practitioners is in making this distinction clear, and in allowing vendor and marketing BS to convince business folks that because they are compliant they are of course secure. I can’t count the number of times I’ve had folks tell me that they thought being compliant with whatever regulation meant they were secure. Why? Because that’s the bill of sale they were sold. And until we can change this basic perception, the rest seems irrelevant. Don’t blame the security practitioners; most of the ones I know clearly express the difference between compliance and security, but it often falls on deaf ears. But what really got my goat was this next section:

As information security professionals how on earth did we let the primary financial driver for security spending be compliance initiatives? We sold our souls because we lacked the knowledge of the business and how to apply what we do in a meaningful way to the business. We let compliance initiatives that promised “measurable” results have their way because we thought we could tag along for the ride and implement best possible solutions given the situation. As I see it we are no better off for this and now our teams have either competing agendas or more work to drive us away from protecting our organizations. Sure we’ve created some “building codes” but do “point in time” snapshots matter anymore when the attacker can mold his approach on a whim?

I don’t know who Rocky has been talking to, but I don’t know a single security practitioner who thinks compliance was the way to go. What I’ve seen are two general schools of thought. One is to rant and rave that everyone is doing it wrong and that compliance doesn’t equal security, but then engage in compliance efforts anyway because there is no choice. The other is to be pragmatic, accept that compliance is here to stay, and do our best within the existing framework. It’s not like we as an industry ‘let’ compliance happen. Even the small group of folks who have managed to communicate well with the business, be proactive, and build a mature program still have to deal with compliance.
As for Rocky’s “building codes” and “point in time” snapshots, for a huge segment of the business world this is a massive step up from what they had before. But to answer Rocky’s question, the failure here is that we told the business, repeatedly, that if they installed this one silver bullet (firewalls, AV, IDS, and let’s not forget PKI) they’d be secure. And you know what? They believed us. Every single time, they shelled out the bucks and we came back for more, like Bullwinkle the Moose: “This time for sure!” We told them the sky was going to fall and it didn’t. We FUDed our way around the business, we were arrogant, and we were wrong. This wasn’t about selling our souls to compliance. It was about getting our asses handed to us because we were too busy promoting “the right way to do things” and telling the business no, rather than trying to enable them to achieve their goals. Want an example? Show me any reasonable evidence that changing all your users’ passwords every 90 days reduces your risk of being exploited. No wonder they don’t always listen to us.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.