Securosis


Predictions Galore: Analyst vs. Researchers

I normally make fun of predictions, but two sets issued this week are well worth reading. The first comes from Mike Rothman, who just issued his 2008 Security Incites. Mike mixes in both technical and general market trends. Some predictions are clearly measurable, and others are there just to make a point. Mike covers everything from metrics and audits to NAC and DLP. On the other side are the more technical predictions from Nate Lawson and Thomas Ptacek. These two researcher powerhouses range from digital watermarking and DRM to NAC and new vulnerability classes. And let’s not forget Hoff’s double-sized predictions, and Stiennon’s. These aren’t the kinds of things that will drive your security spending (unless they come true), and plenty of the predictions overlap or contradict each other. But the point is to get you thinking about the year to come, especially as you make tactical decisions. My predictions? I don’t really play that game, but if you aren’t looking toward better ways to protect yourself from web application attacks and client-side vulnerabilities, you’ll probably have a bad year.


Introduction To Database Encryption

Database encryption is like a home repair project: either it’s really easy and goes exactly as planned, or about five minutes in you realize you might not want to make any weekend plans for the next 2-3 years, and perhaps you should take a trip to the flower store before trying to explain why your family will be living with exposed wall studs and dangling wires for a while. Database encryption (and encryption in general) was one of the first technologies I covered when I became an analyst. Early on I realized something didn’t smell right; I had vendors talking about using encryption to prevent attacks and to “enhance” access controls. But their products were completely linked to access controls, which didn’t really add any value. Also, most attacks against databases involve compromising user accounts or running queries within the privileges of the user, so how would encryption add any value? Encryption doesn’t do a darn thing against many SQL injection attacks or abuse by authorized users. This led to a lot of introspection and the eventual development of the Three Laws of Data Encryption.

We can thus divide database encryption into two categories:

  • Encryption for separation of duties: In this case we almost always use encryption to protect against our own administrators or other privileged users, since we can more easily and efficiently use access controls for everyone else. The classic example is encrypting credit card numbers with the keys stored outside the database, so the stored numbers remain available for credit card processing but administrators and users can’t read them.
  • Encryption for media protection: Here we encrypt database objects (tables/columns), database files, or storage media to prevent exposure of information due to physical loss of the media. As you can imagine, encrypting for media protection is much easier than encrypting for separation of duties, but it clearly doesn’t offer the same security benefits.
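To make the separation-of-duties idea concrete, here is a minimal sketch of column-level encryption with the key held outside the database. The cipher below is a toy stream cipher built from HMAC-SHA256 purely for illustration; in practice you would use a vetted authenticated cipher (e.g., AES-GCM) from a real cryptographic library, and the key would live in an external key management service.

```python
import hashlib
import hmac
import os

# Toy stream cipher: HMAC-SHA256 in counter mode. A stand-in for a
# real cipher -- illustration only, never use this in production.
def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(4, "big"),
                        hashlib.sha256).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # Fresh random nonce per value, prepended to the ciphertext.
    nonce = os.urandom(16)
    ks = keystream(key, nonce, len(plaintext))
    return nonce + bytes(a ^ b for a, b in zip(plaintext, ks))

def decrypt(key: bytes, blob: bytes) -> bytes:
    nonce, ct = blob[:16], blob[16:]
    ks = keystream(key, nonce, len(ct))
    return bytes(a ^ b for a, b in zip(ct, ks))

# The key lives with an external key service, never in the database.
key = os.urandom(32)

# The database stores only ciphertext for the sensitive column.
card_number = b"4111111111111111"
stored_value = encrypt(key, card_number)

# A DBA browsing the table sees an opaque blob; only holders of the
# external key can recover the number for payment processing.
assert decrypt(key, stored_value) == card_number
```

The point of the architecture, not the cipher, is what matters: because the key never enters the database, database privileges alone (even DBA privileges) are no longer sufficient to read the column.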
Thus, the first thing we need to decide when looking at database encryption is what we’re trying to protect against. If we’re just going after the PCI checkbox, or are worried about losing data from swapping out hard drives, someone stealing the files off the server, or misplacing backup tapes, then encryption for media protection is our answer. I’ll discuss it more in a future post, but it’s a fairly straightforward process with manageable performance implications.

If we want to encrypt for separation of duties, life gets more complicated. Databases are complex beasts; far more complex than most people give them credit for. Just go try to teach yourself relational calculus or indexing. Databases like structured data, and once we start mucking with that by randomizing our data through encryption, we start hurting performance. That’s not even counting the normal performance impact of encryption itself. As with encryption for media protection, I’ll talk more specifically about encryption for separation of duties in future posts, but as a general rule of thumb it’s not overly difficult to build encryption into a new database. If you are encrypting a legacy database accessed by applications (legacy or otherwise), however, you are sometimes looking at a 2-3 year project due to the required database and application changes. We run into problems with indices, range searches, referential integrity, application integration, connection pooling, key management, and... well, there’s a lot to talk about here.

To close this post out: the first thing to look at when considering database encryption is what threat you are trying to protect against. If it’s loss of the database files and media, look towards media protection. If you want to limit regular user access, look to access controls or other internal database security features.
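To illustrate why encrypted columns break range searches and indexing, here is a small sketch using SQLite (the table, column, and cipher here are all invented for the example, and the toy cipher is for illustration only). Randomized encryption destroys ordering, so the database can no longer evaluate a range predicate or use an index; the application has to fetch and decrypt every row.

```python
import hashlib
import os
import sqlite3

# Toy randomized encryption (nonce + XOR of a SHA-256-derived
# keystream). A stand-in for a real cipher -- illustration only.
def enc(key: bytes, value: int) -> bytes:
    nonce = os.urandom(8)
    ks = hashlib.sha256(key + nonce).digest()[:8]
    pt = value.to_bytes(8, "big")
    return nonce + bytes(a ^ b for a, b in zip(pt, ks))

def dec(key: bytes, blob: bytes) -> int:
    nonce, ct = blob[:8], blob[8:]
    ks = hashlib.sha256(key + nonce).digest()[:8]
    return int.from_bytes(bytes(a ^ b for a, b in zip(ct, ks)), "big")

key = os.urandom(16)
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE accounts (id INTEGER, balance BLOB)")
for i, bal in enumerate([100, 2500, 40000, 980000]):
    db.execute("INSERT INTO accounts VALUES (?, ?)", (i, enc(key, bal)))

# A range predicate over the ciphertext is meaningless: each value was
# encrypted with a fresh nonce, so byte order has nothing to do with
# numeric order, and no index can help.
rows = db.execute(
    "SELECT id FROM accounts WHERE balance BETWEEN ? AND ?",
    (enc(key, 1000), enc(key, 50000))).fetchall()

# Instead the application must pull every row and decrypt to filter:
# a full table scan plus one decryption per row.
matches = [i for i, blob in
           db.execute("SELECT id, balance FROM accounts ORDER BY id")
           if 1000 <= dec(key, blob) <= 50000]
assert matches == [1, 2]
```

This is the core of the performance problem: every indexed lookup, range scan, or join on an encrypted column degenerates into decrypt-then-filter in the application, which is exactly why legacy retrofits turn into multi-year projects.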
If it’s separation of duties for discrete data (again, we’ll talk more later), then consider column/field encryption, and make sure you can store the keys outside of the database. As you’ve probably figured out by now, this is one of those multiple-post series things I like to do. In the next one we’ll talk about encryption for media protection and why you might want to combine it with database activity monitoring. After that, I’ll dig into field (or other object) encryption for separation of duties, then we’ll close with more detailed recommendations and a discussion of key management. BTW, I’m going in for some minor shoulder surgery on Monday, which will slow me down for a little while. I’ll have some guest posts for next week, and should be back up and running fairly soon.


Stupid Vendor FUD Of The Day

I’m sitting in a Starbucks in Vegas (on my EVDO card, not some risky open WiFi, of course) and nearly snorted my coffee when I read the latest assault against reason by desperate vendors (via Slashdot, adding their own FUD). The title of the article is, “Encryption could make you more vulnerable, warn experts”. In short, a few vendors describe new “key management” attacks, where an attacker who steals the keys and locks you out can hold your data hostage.

However, experts from IBM Internet Security Systems, Juniper, nCipher and elsewhere said that data encryption also brings new risks, in particular via attacks – deliberate or accidental – on the key management infrastructure. ... “Organizations experienced with encryption are standing back and saying this is potentially a nightmare. It is potentially bringing your business to a grinding halt.”

Encryption is also as big an interest for the bad guys as the good guys, warned Anton Grashion, European security strategist for Juniper. “As soon as you let the cat out of the bag, they’ll be using it too,” he said. “For example, it looks like a great opportunity to start attacking key infrastructures.”

“It’s a new class of DoS attack,” agreed Moulds. “If you can go in and revoke a key and then demand a ransom, it’s a fantastic way of attacking a business.”

Folks, I think we ALL agree that key management is important and needs to be secure. Does anyone see the need to create BS headlines about new kinds of attacks we’ve never once seen in practice? No? Not you in the back of the room? Good, I guess we’re all rational here. I realize we’ll never get rid of FUD in our industry, and I use it myself from time to time, but if you’re so desperate you basically just make sh*t up, maybe you need to consider alternative marketing approaches. There are more than enough justifiable reasons to invest in appropriate key management.
Josh Corman of IBM (full disclosure: I know Josh) offers a more reasonable risk: “One fear I have is that we’re all going to hide all our information, but companies are information-driven, so we take tactical decision and stifle ability to collaborate,” he said. Too bad he had to be quoted in this hack job.


Ask Securosis: Is Common Criteria Certification Worth Anything?

This week’s question comes from Rob, who works for a security vendor. It’s one that comes up a lot on both the vendor and the end user sides.

I recall that sometime before Xmas you said that only certifications greater than EAL 5 were worth anything, and that you would write about that later. ... Can a mickey mouse protection profile, or the TOE chosen, affect the end value of the cert, in your opinion?

I’ll be honest, I’m not the biggest fan of Common Criteria. For those of you who don’t pay attention to these sorts of things, Common Criteria is an international standard for certifying security products (or security features). Wikipedia has a reasonable entry for more details. More specifically, it is a standard for specifying and evaluating the security assurance of computer products and systems. It is a core part of the certification and accreditation process used by government agencies. I don’t want to get into the nitty gritty details of Common Criteria, but basically you certify products at one of 7 Evaluation Assurance Levels (EAL 1-7), with 1 being “functionally tested” and 7 being “formally verified design and tested”. To avoid any more acronyms: you document your security functions (usually against a common protection profile), and then certify to the degree your product meets those requirements.

And there’s the rub. The system doesn’t evaluate the security of a product; it is a certification that security features match their documentation, at least at EAL 1-4. At those levels it’s pretty much, “here’s a list of features, and assurance, from an outside lab that charged us WAY too much money, that our features meet those requirements.” When you see EAL 4+ it usually means some more advanced criteria were pulled in as part of the evaluation. Many, MANY EAL 4+ products are just as full of holes and bugs as anything else. The functions documented work as advertised, but that’s about it.
That’s what Rob was asking about with the protection profile and the TOE (Target of Evaluation; the part of the product actually tested). With a weak protection profile and a limited TOE you can still achieve high assurance, since the scope of the evaluation is limited. It’s the same beef I have with those worthless SAS 70 evaluations. At EAL 5-7 life is more interesting, at least here in the US. The NSA gets involved at that point, and you come closer to certifying the entire security of the product and the development process. Very cool, and very time consuming and expensive. Very few products certify at 5+ because of the cost. There are other problems with CC, including keeping a product certified as it changes over time.

My advice? As an end user, unless you’re in government where this is mandated, ignore Common Criteria. Instead, ask your vendor for documentation of their security development process and the tools they use to test the code, or for any independent lab evaluations of the security of the product (vulnerability analysis and testing). CC is essentially meaningless to you if it’s under EAL 5. As a vendor, if you want to sell to the government you’ll have to pony up for an evaluation. Keep it as low as you can to reduce costs, but if you want to play with classified agencies you’re looking at a minimum of 4+, and probably higher. I expect comments on this one will be either non-existent, or very interesting...


Rob Graham Drops 5 Ton Anchor To Cut Undersea Cable

Wired reports that while repairing one of the undersea cables between the UAE and Oman, crews discovered it was cut by an abandoned anchor. If only it were that simple. The real truth, only available here, is that Rob Graham of Errata Security deliberately dropped the anchor on the cable to disprove the various conspiracy theories making their way around the net. Nice try, Rob; we won’t fall for your talk of “cancer clusters” and “random coincidences”. We know better.


How Data Loss Prevention and Database Activity Monitoring Will Connect

There was a pretty good article over at eWeek today talking about the similarities and differences between DLP and DAM. It was kind of strange to read, since I used to be the lead analyst covering those markets, and I might have been the first person to use the DAM term. As I’ve discussed here before, I think information-centric security will evolve into two major stacks. DLP is the start of the Content Monitoring and Protection (CMP) stack, while DAM is the start of the Application and Database Monitoring and Protection (ADMP) stack. We’ll have to see if CMP and ADMP survive as terms now that I’m not with a big analyst firm. Over time I’ll post more on how those stacks will evolve and what they’ll contain. Reading some of the comments on my last DAM post, it’s clear that I still haven’t fully articulated this and need to write some papers on it. Today I’m going to skip ahead, thanks to the eWeek article, and discuss how the two sides will work together. I’ve come up with this division for a lot of reasons, mostly to do with buying centers, technology overlaps, business problems, and business and threat models.

I have to start with a couple of assertions. First, in the model I’m about to show, the CMP stack is embedded into the world of productivity applications and communications, including DRM applied at the time of information creation using content-aware policies. Second, ADMP protects information in business applications and databases, and includes static data labeling (which could come from the DBMS) and can also apply on-the-fly labels using content analysis. CMP is for user-land (Office apps, email, etc.); ADMP is more data center oriented. What will happen is that rights/labels assigned in one stack will be passed to the other stack as information moves between the two. If I run an extract from a database that includes sensitive information, that extract is tagged as sensitive.
If that data goes into an Excel spreadsheet, then a Word document, then a PDF, the rights are maintained through each stage, based on central policies. For example:

  • I run a query from a customer database that includes Social Security numbers in the result.
  • That data is labeled as sensitive, since the SSN column is labeled as sensitive.
  • I extract that data to Excel. The extract is only allowed because Excel is integrated as an application that can apply DRM rights.
  • The document in Excel instantly has mandatory DRM rights applied, based on central policies for that classification of data. We’ve now transitioned from ADMP to CMP.
  • Those DRM rights are maintained through any subsequent movements of the information.

This is just one example of how they’ll bridge, and yes, it sounds like science fiction. But all the components we need are well in development, and you might see real-world examples sooner than you think.
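Since no product implements this bridge today, the steps above can be sketched hypothetically: the ADMP side labels query results based on static column labels, and the CMP side turns those labels into mandatory DRM rights when the data is extracted. Every class, policy, and label name here is invented for illustration; this is a model of the idea, not any real API.

```python
from dataclasses import dataclass, field

# ADMP side: columns carry static sensitivity labels in the database.
COLUMN_LABELS = {"customers.ssn": "sensitive", "customers.name": "public"}

@dataclass
class QueryResult:
    columns: list
    rows: list
    labels: set = field(default_factory=set)

def run_query(columns, rows):
    # The result inherits the labels of every column it touches.
    labels = {COLUMN_LABELS.get(c, "public") for c in columns}
    return QueryResult(columns, rows, labels)

# CMP side: a central policy maps labels to mandatory DRM rights.
POLICY = {"sensitive": {"no-print", "no-forward", "encrypt-at-rest"}}

@dataclass
class Document:
    content: list
    rights: set

def export_to_spreadsheet(result: QueryResult) -> Document:
    # Export is only allowed into a rights-aware application, so the
    # document is born with the rights its labels demand.
    rights = set()
    for label in result.labels:
        rights |= POLICY.get(label, set())
    return Document(result.rows, rights)

result = run_query(["customers.name", "customers.ssn"],
                   [("Alice", "078-05-1120")])
doc = export_to_spreadsheet(result)
assert "sensitive" in result.labels   # label assigned on the ADMP side
assert "no-forward" in doc.rights     # rights applied on the CMP side
```

The design point is the handoff: the label travels with the data across the stack boundary, and rights are derived from central policy at the moment of transition rather than chosen by the user.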


Fifth Cable Down, Iran Offline, Coincidence Meter Drops

Update: Thanks to Windexh8er (who provides good information, despite being far more inflammatory than he needs to be; what’s up with that?) we now know Iran is up and the traffic report is wrong.

Another cable is down in the Middle East, and Iran is now offline. News stories indicate the cables are relatively new, and the odds of simultaneous component failure are low. This can’t be seismic activity, or we’d see other reports from scientists (it’s kind of hard to hide earthquakes and volcanoes these days). The odds are inching towards deliberate tampering, but I’m not going to go all crazy with conspiracy theories yet. There could still be other explanations. And no, I don’t think this is the CIA with black submarines. If we have that capability, which I’m sure we do, we wouldn’t blow it by screwing with cables during Super Bowl weekend just to annoy people. It’s too strategically important a capability to tip our hand without a compelling, immediate cause.


The DLP Guys Will Have A Field Day With This One

It seems that an attorney at Eli Lilly’s outside legal firm accidentally sent an email with confidential information about government settlement talks to a reporter at the New York Times. The Times reporter then started poking around, eventually breaking the story far before anyone was prepared. Oops. Did I mention it was a $1B settlement? Now before we get too excited, let’s keep in mind that even if Eli Lilly deployed DLP, it’s unlikely that their little outside law firm would. We also need to ask ourselves whether any of their DLP policies would have prevented this type of leak, which will depend greatly on what was actually sent to the Times. Perhaps we should start by disabling autocomplete in our email applications first. I wonder what percentage of email leaks are merely the result of that little feature?


Most Amusing Security Breach Of The Week

Oops: over in England, an HSBC branch forgot to lock the doors and turn on the alarm. A 5-year-old accidentally wandered in while his dad was using the ATM. Reading the article, the bank is trying to cover their asses with outright lies. My favorite line from the article? “The Pettigrews stood guard at the bank until police officers arrived.” I suspect someone might be in some remedial door-closing training right now.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.