Securosis


Stupid FUD: Weird Nominum Interview

We see a lot of FUD on a daily basis here in the security industry, and it’s rarely worth blogging about. But for whatever reason this one managed to get under my skin. Nominum is a commercial DNS vendor that normally targets large enterprises and ISPs. Their DNS server software includes more features than the usual BIND installation, and was originally designed to run in high-assurance environments. From what I know, it’s a decent product. But that doesn’t excuse the stupid statements from one of their executives in this interview that’s been all over the interwebs the past couple days:

Q: In the announcement for Nominum’s new Skye cloud DNS services, you say Skye ‘closes a key weakness in the internet’. What is that weakness?

A: Freeware legacy DNS is the internet’s dirty little secret – and it’s not even little, it’s probably a big secret. Because if you think of all the places outside of where Nominum is today – whether it’s the majority of enterprise accounts or some of the smaller ISPs – they all have essentially been running freeware up until now. Given all the nasty things that have happened this year, freeware is a recipe for problems, and it’s just going to get worse. …

Q: Are you talking about open-source software?

A: Correct. So, whether it’s Eircom in Ireland or a Brazilian ISP that was attacked earlier this year, all of them were using some variant of freeware. Freeware is not akin to malware, but is opening up those customers to problems. … By virtue of something being open source, it has to be open to everybody to look into. I can’t keep secrets in there. But if I have a commercial-grade software product, then all of that is closed off, and so things are not visible to the hacker. … Nominum software was written 100 percent from the ground up, and by having software with source code that is not open for everybody to look at, it is inherently more secure.
… I would respond to them by saying, just look at the facts over the past six months, at the number of vulnerabilities announced and the number of patches that had to be made to BIND and freeware products. And Nominum has not had a single known vulnerability in its software.

The word “bullsh**” comes to mind. Rather than going on a rant, I’ll merely include a couple of interesting reference points:

  • Screenshot of a cross-site scripting vulnerability on the Nominum customer portal.
  • Link to a security advisory in 2008. Gee, I guess it’s older than 6 months, but feel free to look at the record of DJBDNS, which wasn’t vulnerable to the DNS vuln.

As for closed source commercial code having fewer vulnerabilities than open source, I refer you to everything from the recent SMB2 vulnerability to pretty much every proprietary platform vs. FOSS comparison in history. There are no statistics to support his position. Okay, maybe if you set the scale to 2 weeks. That might work: “over the past 2 weeks we have had far fewer vulnerabilities than any open source DNS implementation”. Their product and service are probably good (once they fix that XSS, and any others that are lurking), but what a load of garbage in that interview…


A Bit on the State of Security Metrics

Everyone in the security industry seems to agree that metrics are important, but we continually spin our wheels in circular debates on how to go about them. During one such email debate I sent the following. I think it does a reasonable job of encapsulating where we’re at:

  • Until Skynet takes over, all decisions, with metrics or without, rely on human qualitative judgement. This is often true even for automated systems, since they rely on models and decision trees programmed by humans, reflecting the biases of the designer. This doesn’t mean we shouldn’t strive for better metrics.
  • Metrics fall into two categories: objective/measurable (e.g., number of systems, number of attacks) and subjective (risk ratings). Both have their places.
  • Smaller “units” of measurement tend to be more precise and accurate, but more difficult to collect and compile to make decisions… and at that point we tend to introduce more bias. For example, in Project Quant we came up with over 100 potential metrics to measure the costs of patch management, but collecting every one of them might cost more than your patching program. Thus we had to identify key metrics and rollups (bias), which also reduces accuracy and precision in calculating total costs. It’s always a trade-off (we’d love to do future studies comparing the results of using all metrics vs. key metrics, to see if the deviation is material).
  • Security is a complex system based on a combination of biological (people) and computing elements. Thus our ability to model will always have a degree of fuzziness. Heck, even doctors struggle to understand how a drug will affect a single individual (that’s why some people need medical attention 4 hours after taking the blue pill, but most don’t). We still need to strive for better security metrics and models.
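The all-metrics vs. key-metrics trade-off is easy to see with toy numbers. This is a hypothetical sketch – the metric names, hours, and hourly rate below are invented for illustration, not taken from Project Quant:

```python
# Hypothetical per-patch-cycle metrics (hours per task) -- invented values.
full_metrics = {
    "identify_advisory_hrs": 2.0,
    "acquire_patch_hrs": 0.5,
    "test_patch_hrs": 6.0,
    "schedule_window_hrs": 1.0,
    "deploy_patch_hrs": 4.0,
    "verify_deploy_hrs": 1.5,
    "document_hrs": 0.5,
}
hourly_rate = 85  # assumed loaded cost per analyst hour

# Cost estimate from the full metric set.
full_cost = sum(full_metrics.values()) * hourly_rate

# Rollup: keep only the three largest cost drivers (the "key metrics" bias).
key_metrics = dict(
    sorted(full_metrics.items(), key=lambda kv: kv[1], reverse=True)[:3]
)
rollup_cost = sum(key_metrics.values()) * hourly_rate

deviation = (full_cost - rollup_cost) / full_cost
print(f"full: ${full_cost:.0f}, rollup: ${rollup_cost:.0f}, "
      f"deviation: {deviation:.0%}")
```

With these made-up numbers the cheaper rollup undercounts cost by roughly a fifth – exactly the kind of deviation a follow-up study would need to quantify against real data.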
My personal opinion is that we waste far too much time on the fuzziest aspects of security (ALE, anyone?), instead of focusing on more constrained areas where we might be able to answer real questions. We’re trying to measure broad risk without building the foundations to determine which security controls we should be using in the first place.


Database Encryption Benchmarking

Database benchmarking is hard to do. Any of you who followed the performance testing wars of the early ’90s, or the adoption and abuse of TPC-C and TPC-D, know that the accuracy of database performance testing is a long-standing sore point. With database encryption the same questions of how to measure performance rear their heads, but in this case there are no standards. That’s not to say the issue is not important to customers – it is. Tell a customer encryption will reduce throughput by 10% or more, and your meeting is over. End of discussion. Just the fear of potential performance issues has hindered the adoption of database encryption. This is why it is incredibly important for all vendors who offer encryption for databases (OS/file system encryption vendors, drive manufacturers, database vendors, etc.) to be able to say their impact is below 10%, or as far below that number as they can. I am not sure where it came from, but it’s a mythical number in database circles.

Throughout my career I have been doing database performance testing of one variety or another. Sometimes I have come up with a set of tests that I thought exercised the full extent of the system. Other times I created test cases to demonstrate high and low water marks of system performance. Sometimes I captured network traffic at a customer site as a guide, so I could rerun those sessions against the database to get a better understanding of real world performance. But it is never a general use case – it’s only an approximation for that customer, approximating their applications. When testing database encryption performance, several variables come into play:

What queries? Do users issue 50 select statements for every insert? Are the queries a balance of selects, inserts, and deletes? Do I run the queries back to back as fast as I can, like a batch job, or do I introduce latency between requests to simulate end users? Do I select queries that only run against encrypted data, or all data?
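A minimal harness for experimenting with query mix might look like the sketch below. The assumptions are loud ones: an in-memory SQLite table stands in for the real database, and the 50:1 select-to-insert ratio and think-time knob simply mirror the questions above – real testing would replay your own workload against your own system.

```python
import random
import sqlite3
import time

def run_mix(conn, n_ops=510, selects_per_insert=50, think_time=0.0):
    """Run a select/insert mix; return (elapsed_seconds, inserts_done)."""
    cur = conn.cursor()
    inserts = 0
    start = time.perf_counter()
    for i in range(n_ops):
        if i % (selects_per_insert + 1) == 0:
            cur.execute("INSERT INTO t(val) VALUES (?)", (random.random(),))
            inserts += 1
        else:
            cur.execute("SELECT COUNT(*) FROM t WHERE val > ?",
                        (random.random(),))
        if think_time:
            time.sleep(think_time)  # simulate end-user latency; 0 = batch job
    return time.perf_counter() - start, inserts

conn = sqlite3.connect(":memory:")  # stand-in for the system under test
conn.execute("CREATE TABLE t (val REAL)")
elapsed, inserts = run_mix(conn)  # back-to-back, batch-job style
print(f"{inserts} inserts, {510 - inserts} selects in {elapsed:.4f}s")
```

Rerunning with a nonzero `think_time` turns the same mix into an end-user simulation, which is exactly why two tests with identical queries can report very different "encryption overhead" numbers.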
Generally when testing database performance, with or without encryption, we select a couple of different query profiles to see what types of queries cause problems. I know from experience I can create a test case that will drop throughput by 1%, and another that will drop it by 40% (I actually had a junior programmer unknowingly design and deploy a query with a Cartesian join that crashed our live server instantly, but that’s another story).

What type of encryption? Full disk? Tablespace? Column level? Table level? Media? Each variant has advantages and disadvantages. Even if they all use AES-256, there will be differences in performance. Some have hardware acceleration; some limit the total amount of encrypted data by partitioning data sets; others must contend for thread, memory, and processor resources. Column level encryption encrypts less data than full disk encryption, so comparing the two is not an apples-to-apples comparison. And if the encrypted column is used as an index, it can have a significant impact on query performance, mitigating the advantage of encrypting less data.

What percentage of queries run against encrypted data? Many customers can partition sensitive data into a database or tablespace that is much smaller than the overall data set. This means many queries do not require encryption and decryption, and the percentage impact of encryption on these systems is generally quite good. It is not unreasonable to see encryption impact the database as a whole by only 1% over a 24 hour period, while reducing throughput against the encrypted tablespace by 25%.

What is the load on the system? Encryption algorithms are CPU intensive. They take some memory as well, but it’s the CPU you need to pay attention to. With two identical databases, you can get different performance results if the load on one system is high. Sounds obvious, I know, but this is not quite as apparent as you might think.
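The 1%-overall vs. 25%-on-the-tablespace figures are easy to sanity-check. The 4% workload share below is an assumed value, chosen only to make the arithmetic visible:

```python
# If the encrypted tablespace serves only a small share of the workload,
# a 25% hit there dilutes to ~1% across the whole database.
encrypted_share = 0.04      # assumed fraction of work hitting encrypted data
tablespace_slowdown = 0.25  # throughput reduction on the encrypted tablespace

overall_impact = encrypted_share * tablespace_slowdown
print(f"whole-database impact: {overall_impact:.1%}")  # -> 1.0%
```

The flip side is also worth computing: the same product benchmarked against a database where *every* query touches encrypted data would report the full 25%.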
For example, I have seen several cases where the impact of encryption on transactional throughput was a mere 5%, but CPU load grew by 40%. If the CPUs on the system do not have sufficient ‘headroom’, you will slow down encryption, overall read/write requests (both encrypted and unencrypted), and the system’s transactional throughput over time. Encryption’s direct slowdown of encrypted traffic is quite different from its impact on overall system load and general responsiveness. How many cryptographic operations can be accomplished in a day is irrelevant; how many can be accomplished under peak usage is the more important indicator. Resource consumption is not linear, and as you get closer to the limits of the platform, performance and throughput degrade at a greater-than-linear rate.

How do you know what impact database encryption will have on your database? You don’t. Not until you test. There is simply no way for any vendor to provide a universal test case. You need to test with cases that represent your environment. You will see numbers published, and they may be accurate, but they seldom reflect your environment and so are generally useless.
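That greater-than-linear degradation near the platform’s limits has the same shape as response time in a basic M/M/1 queue. This is a toy model with made-up service rates, not a database benchmark, but it shows why eating CPU headroom hurts far more at 99% utilization than at 50%:

```python
def response_time(service_rate, arrival_rate):
    """Mean response time of an M/M/1 queue: 1 / (mu - lambda)."""
    assert arrival_rate < service_rate, "system is saturated"
    return 1.0 / (service_rate - arrival_rate)

mu = 100.0  # ops/sec the CPU can service (made-up capacity)
for load in (50.0, 80.0, 90.0, 99.0):  # offered ops/sec
    ms = response_time(mu, load) * 1000
    print(f"utilization {load / mu:.0%}: mean response {ms:.0f} ms")
```

Doubling utilization from 50% to near saturation doesn’t double latency – it multiplies it fifty-fold, which is why a 40% CPU increase from encryption can be invisible on an idle system and catastrophic at peak.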


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, at the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.