Encryption: The Maginot Line of Data Security

History is a funny thing. It’s amazing that what many children see in early schooling as a boring collection of facts is neither boring nor factual. On a good day we might get some dates correct, but there isn’t a “fact” in history that isn’t open to interpretation. This is as it should be; think about all the factors that went into a major life decision- say a marriage, or picking your college. Now distill everything involved in that decision into a paragraph, stick it in a drawer for a couple of decades, pull it out, and see if it still matches your memories and accurately reflects the situation. If you don’t have a few decades to spare, the answer is, “it doesn’t.” The main problems with history are actually those we see in computer science- bandwidth, compression, indexing, and search. We can’t possibly collect and store all the bandwidth of human interaction, so we drop into “sampling mode” and further compress it for long-term storage. We then rely on imperfect indexing to organize the data, and flawed search protocols to find what we need. We don’t collect everything, we lose large amounts of data in compression, we index it poorly, and we rely on primitive search tools. No wonder history is open to interpretation.

Take the Maginot Line. And encryption.

For those of you who aren’t military history buffs, the Maginot Line was a series of interlocking defenses, sometimes 25 kilometers deep, that the French built after WWI to keep the Germans out. In popular security culture the term is often used as an analogy for a misguided investment that was designed to fight the last war and is easily circumvented. In marketing films of the time the Maginot Line was promoted as an invincible defense for France- a folly painfully realized when the German invasion succeeded in only a month. A metaphor for a failure of hubris.

Reality is, of course, open to interpretation. Another interpretation of the Maginot Line is that it completely succeeded in its defined task: preventing a frontal assault along the Franco-German border. The Maginot Line held, but the other defensive layers- the Ardennes and the French Army along the Belgian border- failed. The Maginot Line was designed for a mission it effectively met, but other design flaws in France’s defense in depth led to the German occupation.

Which brings us to encryption. The first version of the PCI Data Security Standard called encryption “the ultimate data security technology”. Wrong. Encryption is a powerful technology, but probably the most misunderstood in terms of what it actually provides for data security. With the McAfee acquisition of SafeBoot for $350M, encryption is in the headlines again. A while ago I wrote the Three Laws of Data Encryption to help users get the most value out of encryption.

I really do think of encryption as the Maginot Line of data security. It’s powerful- nigh invincible if used correctly- but easily circumvented if your other security controls aren’t properly designed. For example, if you have a large application connected to a large database full of encrypted credit card numbers, and that application is subject to SQL injection, odds are your encryption is worthless. Laptop encryption protects you from stolen laptops, but is useless against malicious software running in the context of the user. As I keep walking through the Data Security Lifecycle you’ll see a lot of posts on encryption; it’s a fundamental technology for protecting content.
But when big companies start throwing around hundreds of millions of dollars, I think it’s an opportune time to step back and remind ourselves of the problem we’re trying to solve, and how the different parts of the solution fit together. If we want a real-world example we need look no further than TJX. Rumor has it that cardholder data was encrypted, but the attackers sniffed an unencrypted portion of the communications to perform transactions. The encryption worked perfectly, but the breach still succeeded.
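To make the SQL injection point concrete, here is a minimal sketch (the toy schema, names, and stand-in “encryption” routine are hypothetical, not taken from any real system) of how an injectable application hands back decrypted data no matter how strong the database encryption is:

```python
# Minimal, hypothetical sketch: database encryption doesn't help when the
# application layer is injectable, because the application decrypts whatever
# rows the (injected) query returns. Toy schema and XOR "encryption" stand-in.
import sqlite3

KEY = b"demo-key"

def toy_crypt(data: bytes) -> bytes:
    # Reversible XOR stand-in so the example is self-contained; a real system
    # would use a proper cipher, but the failure mode is identical.
    return bytes(b ^ KEY[i % len(KEY)] for i, b in enumerate(data))

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE cards (customer TEXT, pan BLOB)")
db.execute("INSERT INTO cards VALUES (?, ?)", ("alice", toy_crypt(b"4111111111111111")))
db.execute("INSERT INTO cards VALUES (?, ?)", ("bob", toy_crypt(b"5500000000000004")))

def lookup_card(customer: str) -> list[str]:
    # Vulnerable: user input is concatenated straight into the SQL text.
    rows = db.execute(f"SELECT pan FROM cards WHERE customer = '{customer}'").fetchall()
    # The application decrypts results for legitimate use, and therefore also
    # for anything an injected predicate happens to pull back.
    return [toy_crypt(row[0]).decode() for row in rows]

print(lookup_card("alice"))          # normal use: one decrypted card number
print(lookup_card("x' OR '1'='1"))   # injection: every card, already decrypted
```

The attacker never touches the keys or the ciphertext; they simply ride the application’s own authorized access path, which is exactly the Maginot-style failure described above.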


Some Answers for Jeremiah: Website Vulnerabilities

Jeremiah posted these questions on dealing with website vulnerabilities. Here are my quick answers (I have to run- sorry for the lack of links, but you can Google the examples):

Let’s assume a company is informed of a SQLi or XSS vulnerability in their website (I know, shocker), either privately or via public disclosure on sla.ckers.org, and that the vulnerability potentially places private personal information (PPI) or intellectual property at risk of compromise. My questions are:

1) Is the company “legally” obligated to fix the issue or can they just accept the risk? Think SOX, GLBA, HIPAA, PCI-DSS, etc.

Definitely no for intellectual property. Definitely no for SOX- SOX says you’re free to make as many dumb mistakes and lose as much money as you want, as long as you report it accurately. Other laws are a toss-up, but generally there is no obligation unless there is evidence that a breach occurred. For PCI-DSS you have to remediate, or document compensating controls for, any network vulnerabilities at the time of your audit (and this expands to applications with 1.1), but there is no definitive requirement for immediate remediation. California AB 1950 is the big question mark in this area, and I’m unsure of its enforcement mechanisms. The regulations are very unclear and unhelpful here, and it’s quite likely a company can accept the risk- but if a breach occurs, they may be held negligent. Take a look at the PetCo case, where the FTC mandated a security program after a breach, and Microsoft/MSN. The companies were held liable for losing customer data, but not because of any of the usual regulations. There is almost no case law that I’m aware of.

2) What if repairs require a significant time/money investment? Is there a resolution grace period, does the company have to install compensating controls, or must they shut down the website while repairs are made?

No. Most regulations only require breach notification, or remediation of flaws discovered through auditing. Reasonable person theory probably applies if there is a breach with losses and it goes to court. I’ve read all of the regulations- none mention a specific time period.

3) Should an incident occur exploiting the aforementioned vulnerability, does the company bear any additional legal liability?

They may carry liability due to negligence. See the cases I mentioned above.

4) If the company’s website is PCI-DSS certified, is the website still considered certified after the point of disclosure, given what the web application security sections dictate?

Unknown, because there are no public cases that I can find. I believe you remain certified until the next audit. In the case of CardSystems, they were PCI certified when the breach occurred, and were immediately re-audited and de-certified following public disclosure of the breach. That’s one problem with PCI-DSS- it’s very audit-reliant, and changes between audits don’t directly affect certification.

5) Does the QSA or ASV who certified the website potentially risk any PCI Council disciplinary action for certifying a non-compliant website? What happens if this becomes a pattern?

There are no known cases of disciplinary action, but an audit insider might know of one. Disciplinary action will most likely only take place if the audit failed to follow best practices and a large breach occurs, or if there is (as you mention) a pattern. None of this is formalized, to my knowledge.

I’ve spent a lot of time researching and discussing all the various data protection and breach disclosure regulations.
Organizations generally only face potential liability if they either falsify documentation for auditing or certification, or suffer a breach and are later shown to be negligent. I am unaware of any legal enforcement mechanism that applies when there is a known vulnerability but no confirmed unauthorized disclosure of information. This is an inherent risk of audit-based approaches to data protection.
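As context for question 2, the class of SQLi flaw Jeremiah describes is typically remediated by replacing string-built queries with parameterized ones. A minimal, hypothetical before/after sketch (table, columns, and function names are illustrative only, not drawn from any actual case):

```python
# Hypothetical before/after sketch of remediating a SQL injection flaw of the
# kind discussed above; schema and names are illustrative only.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, email TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

def find_email_vulnerable(name: str):
    # Before: attacker-controlled input becomes part of the SQL statement itself.
    return db.execute(f"SELECT email FROM users WHERE name = '{name}'").fetchall()

def find_email_fixed(name: str):
    # After: the driver binds the value as data, so input cannot rewrite the query.
    return db.execute("SELECT email FROM users WHERE name = ?", (name,)).fetchall()

print(find_email_vulnerable("x' OR '1'='1"))  # returns every row
print(find_email_fixed("x' OR '1'='1"))       # returns no rows
```

The size of that change varies widely from one application to another, which is part of why the time/money and grace period questions come up at all.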


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
  • Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, at the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.