Network Security Podcast: Episode 80

Once again Martin and I recorded late enough in the day that I could enjoy a fine beer during the taping (Moose Drool this week). I also need to shout out to Paul and Larry of Pauldotcom Security Weekly; based on their advice I picked up a WRTSL54GS for some wireless access point hacking. Too bad I bricked it… by opening the box. Needless to say, that one is on its way back to the online store, and a new one is headed to me. I've been working on this pet project of mine for a year and really hope this is the right box to get the job done. Also, congrats to Martin on re-entering the world of the gainfully employed. He starts with Trustwave on Tuesday.

Show Notes:

  • Microsoft AutoRuns
  • PGP flaw (not really a flaw at all):
    Securosis: Slashdot bias and much ado about nothing PGP encryption issue
    Slashdot: Undocumented bypass in PGP whole disk encryption
    Securology: PGP whole disk encryption – barely acknowledged intentional bypass
  • Retailers vs. PCI:
    Securosis: Retailers b*tch slap PCI Security Standards Council
    TechTarget: National Retail Federation takes aim at PCI DSS Council
    SC Magazine: Retail lobby offers alternative to PCI standards
    Network Security Blog: Merchants mad about credit card retention
  • iPhone jailbreak (missed the link on this one)
  • Suit against Apple for bricking iPhones
  • Six ticks to Midnight: One plausible journey from here to a total surveillance society (Tech Liberation Front)
  • OnStar to stop support
  • RSA Speaking on Security interviews Shon Harris, and I get a mention too
  • CIO.com: Hacker Economics 1: Malware as a service

Tonight's Music: The Moon is Full by Albert Collins, Johnny Copeland, and Robert Cray

Network Security Podcast, Episode 80, October 9, 2007
Time: 46:51


Everything You Need To Know About Security And Risk Is In This Post (Humor)

Meerkat Manor, via the Guerilla CISO. Here's an excerpt:

09 October 2007: Dear diary, I drew sentry duty for the third day this week. I know it's my solemn duty to protect the clan, but my risk assessment has determined that, although a predator is a high-impact event, it is a low rate-of-occurrence activity, and so I think a better use of my time is in foraging for stray eggs. Besides, if the predators come and eat us all, it's not like I'll have to face the Meerkat Manor Board of Directors.

10 October 2007: Dear diary, I grow tired of the incessant looking for predators. I mean, why do us meerkats focus exclusively on detective controls, which use up to 15% of our available manpower, when we could just as easily reduce the sentries to 5% of our efforts and put in place corrective controls such as trap holes and punji sticks to reduce the threats to our home? The true cost savings is that corrective controls are a one-time installation, where sentry duty is a recurring bill. Didn't the alpha pair learn anything in their Masters in Meerkat Administration classes?

11 October 2007: Dear diary, today I instituted a metrics program to gauge the effectiveness of our sentry program and to determine if we are getting the best level of risk for the time we are investing. So far, I've made a bar chart to analyze the total number of predator alerts versus the total number of predator intrusions. I think I have a business case to slowly reduce the ratio of sentries to foragers during the day.


The Five Problems With Data Classification, And An Introduction To Practical Data Classification

Data classification is one of the most essential tools of data security. It enables us to translate business priorities into technical and physical controls over the management and protection of data. Applying data security controls without data classification is like trying to protect a pile of cash in an open field filled with piles of leaves by air dropping concrete barricades from 10,000 feet. At night.

It's also hard. Really hard. So hard that outside of a few companies in a few industries (mostly financial services, energy production, military/intelligence, and some manufacturing), I'm not sure I've ever seen someone with a useful and effective classification program. I've talked with hundreds, possibly thousands, of organizations struggling with data classification. Some give up, others blow wads of cash on consultants who don't really give them what they want, and others have a well documented, detailed program that everyone ignores.

Data classification is so hard because it is both non-intuitive and instinctive. Instinctive in that we all innately classify everything we see. From people, to movies, to enterprise data, we humans are judgmental classification machines. We classify as good vs. bad, threat vs. non-threat, important vs. irrelevant. Non-intuitive because in an organization we're asked to classify not based on our instincts, but based on policies designed by someone else. Thus the first problem with data classification isn't that we can't classify; it's that we always classify. We just classify based on our instincts, not a piece of paper on a shelf. When they differ, our instincts win.

The second problem with data classification is that we overlay it onto business process, rather than building it in. Classification becomes a task outside of the processes we engage in to complete our job; it's an "add on" that slows us down, and is simple to ignore.

The third problem with data classification is that we fail to provide employees with the tools to get the job done. It's not only manual and non-intuitive, but we don't provide the technical tools needed to even make it meaningful. Quarterly assessments in a spreadsheet aren't very useful.

The fourth problem with data classification is that it's static. We tend to classify data at the time of creation or based on where it's stored, but that's never revised based on changing use and business context. Data's sensitivity varies greatly over its lifecycle and based on how it's being used; few data classification systems account for this.

The fifth, and final, problem with data classification is that it's usually too complicated. The classification scheme and process itself is even less intuitive than asking someone to classify against their instincts. We use terms like "sensitive but unclassified" that have little meaning outside the world of the military/government.

But that doesn't mean all hope is lost. As I mentioned before, there are places where data classification works well, mostly because they've adapted it for their specific environment. The military does a good job of overcoming these obstacles: data classification is built into the culture, which redefines native instincts to include enterprise priorities. It's baked into the process of handling information and essential to business (yes, the military is a business) processes. Technology systems are specifically designed and chosen due to their suitability to handle classified data. No, it's not perfect, but it does work.

That doesn't mean that military classification works in private enterprise. It doesn't. It fails. Badly. Which is unfortunate, because that's how all the books tell you to do it. Over the next two posts I'll suggest something I call Practical Data Classification. It's designed to provide organizations an effective model that integrates with existing enterprise practices and culture, while still providing value. It's not for you military or financial types that already do this well; consider it data classification for the rest of us.
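To make the fourth problem concrete: here's a minimal sketch, in Python with entirely hypothetical names, of the difference between stamping data once at creation and attaching a label that carries business context and a re-review trigger.

    from dataclasses import dataclass
    from datetime import date, timedelta

    LEVELS = ("Public", "Internal", "Sensitive")

    @dataclass
    class ClassificationLabel:
        level: str                       # one of LEVELS
        business_context: str            # why it's classified, e.g. "quarterly billing"
        classified_on: date              # when the judgment was made
        review_after: timedelta = timedelta(days=90)  # lifecycle trigger

        def is_stale(self, today: date) -> bool:
            # A stale label is a prompt to re-classify, not a security decision.
            return today > self.classified_on + self.review_after

    label = ClassificationLabel("Sensitive", "pre-release earnings", date(2007, 10, 1))
    if label.is_stale(date.today()):
        print(f"Re-review '{label.business_context}': may no longer be {label.level}")

Nothing here is a product feature; the point is simply that a classification without context and an expiry is a stamp that silently goes stale.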


Product Happenings: Guardium, SafeBoot, Palo Alto, and Vontu

Despite my departure from the analyst world, thanks to the blog some of the vendors out there still keep me updated on their products. I also still have to track big swaths of the market to support my consulting work. While I don't intend for this blog to just spew PR drivel, I do see some cool stuff every now and then that's worth mentioning.

Disclaimer: I do not currently have a business relationship with any of the vendors/products in today's post, but based on the nature of my business I do work with vendors and often have discussions about potential projects. I will disclose those relationships when I can, and while I strive to remain objective no matter who I work with, you should never go buy something just because I said it was cool. Do the research, get balanced opinions, trust no one. I'm not endorsing these products over their competitors, just highlighting some interesting advances, and you'll probably see competing products pop up in other posts over time. Here are a few things that have caught my eye:

First up is SafeBoot, just acquired by McAfee. Overall I think the acquisition is positive, but there's really no reason to consolidate whole drive encryption with endpoint DLP. File-level encryption linked to DLP is more interesting, but also very challenging, and I suspect it's at least a couple of years out for McAfee beyond some basic content like Social Security Numbers. It's wait and see on this one, but SafeBoot stands up on its own.

Next is Guardium, who just updated their product for the mainframe. Guardium briefed me last Friday on this and I meant to get something up earlier. This is a really smart move, especially since they partnered with Neon, who sell to the mainframe buying center. They can now offer full database monitoring (including SELECT queries) on the mainframe without relying on network sniffing (which misses certain kinds of connections). Why do you care? Now you have an independent way to enforce separation of duties on mainframe administrators without interfering with how they work or affecting performance. And you can integrate the policies for alerts, and the logs, with all your other database monitoring. I think I was more excited about this one than the guys giving me the briefing; it's one of those "small but big" markets. (There's a rough sketch of the idea at the end of this post.)

An industry contact I work with pointed me towards Palo Alto Networks, and I had a brief conversation with them about a month ago. Basically, they parse and secure network traffic based on the application, not just port and protocol. This is a big problem for things like DLP solutions that don't really like it (or don't work as well) when they have to figure out which application is tunneling over port 80 this week. I think these guys have a lot of partnership opportunities down the road.

Last up today is Vontu, who just released version 8. The news here is increased endpoint capabilities, including the start of blocking, and integration with document management systems. This release isn't notable for any new world-changing feature, but because most of the work was on the back end, increasing the capabilities of the product line. DLP is settling down a bit and focusing on maturing, rather than land-grabbing with hyped-up features. I've had some other DLP briefings lately and I'm seeing this focus on maturing the platforms across the board; moving from start-ups to mature products is some seriously hard work.

Blocking activity on the endpoint is a big deal and it's nice to see Vontu add it (a few competitors also have their own flavor of it, so it's not unique).

That's it for now. I probably won't do these more than once a month or so, and I'll only include updates that seem interesting to me, either because they are innovative or because they show an industry trend. I'm happy to take briefings from just about anyone, but that by no means guarantees a mention on the blog. Now back to the absolutely thrilling world of data classification…
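Since I mentioned the Guardium-style separation of duties above, here's a rough sketch of the kind of policy a database activity monitor enforces. This is a few lines of illustrative Python, not Guardium's actual policy language, and the accounts, tables, and audit feed are all made up:

    import re
    from typing import Optional

    DBA_ACCOUNTS = {"SYSADM", "DBADMIN"}          # privileged accounts (hypothetical)
    SENSITIVE_TABLES = {"CUSTOMER", "CARD_DATA"}  # business data tables (hypothetical)

    SELECT_RE = re.compile(r"\bSELECT\b.*?\bFROM\s+(\w+)", re.IGNORECASE | re.DOTALL)

    def check_event(user: str, sql: str) -> Optional[str]:
        """Alert when a privileged account reads business data it administers."""
        match = SELECT_RE.search(sql)
        if match and user.upper() in DBA_ACCOUNTS:
            table = match.group(1).upper()
            if table in SENSITIVE_TABLES:
                return f"ALERT: {user} ran SELECT against {table}"
        return None

    # One event from a fictional audit feed:
    print(check_event("SYSADM", "SELECT name, card_no FROM card_data"))

The real value, as noted above, is that the monitoring is independent of the administrators it watches, and catches connection paths that network sniffing misses.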


Practical Data Classification: Type 1, The Hasty Classification

In over thirteen years with mountain rescue and five years as a ski patroller I participated in countless search and avalanche drills, and a fair number of real incidents. Search in the real world, as in the computing world, is difficult due to the need to balance speed with thoroughness. In a rescue situation you need to find the victim as quickly as possible; a thorough search has a higher Probability of Detection (POD), but takes longer. Assuming you're looking for a live victim, this time can mean the difference between a rescue and a recovery. Since detailed searches also take time to gather resources (searchers), most searches/rescues start with what's called a hasty. A hasty search is light and fast: you send out a smaller, faster team to scour the area for obvious clues. The probability of detection is lower, but you don't need a 50-person team with full gear to find a half-buried skier in an obvious tree well in the middle of a deposition zone (where all the snow ends up after an avalanche). I've been on a bunch of hasty teams in real-world searches (no avalanches) and would guess that we found the victim before the big search was launched somewhere around 20-30% of the time. A hasty is effective because it's designed to maximize speed while finding anything obvious in critical situations.

We can adapt the principle of the hasty for data classification. Many classification programs fail because they attempt to solve the entire problem, and take too long to protect the critical assets. In a hasty classification program you focus on a single critical data type and roll out classification enterprise-wide. Rather than overwhelming users with a massive program, focus on one kind of data that's clearly critical in a very focused program to protect it. It's a baby step that protects a critical asset while slowly changing user habits.

Data Classification Type 1: Hasty Classification

The short version:

  • Pick one critical type of data. I suggest credit card numbers, Social Security Numbers, or something similar.
  • Have business units tell you where they use it and store it.
  • Issue security policies for how that data needs to be secured.
  • Work with units to secure the systems. Security helps the business units secure the data, while audit plays the enforcement role. This makes security the good guys.
  • Keep it updated with ongoing audits and regular “compliance” reporting of where and how data is used and stored.

Same process, with more details:

  • Design your basic classifications. I suggest no more than 3-4, in plain English, for example “Sensitive/Internal/Public”. If you deal with personally identifiable information (PII) that can be a separate classification; call it PII, NPI, HIPAA, or whatever term your industry uses.
  • Pick one type of critical data that is easy to recognize. I highly recommend PII: credit card numbers, Social Security Numbers, or something similar. (There's a quick sketch of what recognizing this data automatically might look like at the end of this post.)
  • Get executive approval/support; this has to come from as high as possible. If you can't get it, and you care about security, update your resume. Beating your head against a wall is painful and only annoys the wall and anyone within earshot.
  • Issue a memo requiring everyone to identify any business process or IT system that contains this data within 30/60/90 days. Collect results.
  • While collecting the results, finalize security standards for how this data is to be used, stored, and secured. This includes who is allowed to access it (based on business unit/role), approved business processes (billing only, or billing/CRM, etc.), approved applications/systems (be specific), where it can be stored (specific systems and paper repositories), and any security requirements.
  • Security requirements should be templates and standards with specific, approved configurations: which software, which patch level, which configuration settings, how systems communicate, and so on. If you can't do this yourself, just point to open standards like those at cisecurity.org.
  • Issue the security standards. Require business units to bring systems into compliance within a specific time frame, or get an approved exception.
  • IT Security works with business units to bring systems/processes into compliance. They work with the business and do not play an enforcement role. If exceptions are requested, they must figure out how to secure the data for that business need, and the business will be required to adopt the needed alternative security controls for that business process.
  • After the time period to bring systems into compliance expires, the audit group begins random audits of business units to ensure reporting accuracy and that systems comply with corporate standards.
  • Business units periodically report (on a rolling schedule) any changes in the use or storage of the now-classified data.
  • Security continuously evaluates security standards, issues changes where needed, and helps business units keep the data secure. Audit plays the enforcement role of looking for exceptions.

I know some of you are sitting there going, “This is the easy way? I'd hate to see the hard way!” The hasty classification is really an entire data classification program, but focused on one single kind of easily identified data. When you think about it, you're just picking that critical data, figuring out where it is, helping secure it, and using audit to make sure you're doing what you think you're doing. When I discuss this with people I prefer to lay out all the steps in detail, but most of you will adapt it to suit your own environment. The key is to keep it simple, pick one data type to start, and separate those securing the data from those verifying that the data is secure. In our next post on this topic we'll talk about how to grow this into a complete program. I'm even working on pretty pictures!
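As promised above, here's a minimal sketch of what “easy to recognize” means for a hasty pass. It's illustrative Python under loose assumptions (simple patterns plus a Luhn checksum); real discovery tools add context, proximity analysis, and fingerprinting, but the core idea is this small:

    import re

    SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
    CARD_RE = re.compile(r"\b(?:\d[ -]?){12,15}\d\b")  # 13-16 digits, optional separators

    def luhn_ok(candidate: str) -> bool:
        """Luhn checksum weeds out digit strings that merely look like card numbers."""
        digits = [int(d) for d in re.sub(r"\D", "", candidate)][::-1]
        total = sum(d if i % 2 == 0 else (d * 2 - 9 if d > 4 else d * 2)
                    for i, d in enumerate(digits))
        return total % 10 == 0

    def hasty_scan(text: str) -> list:
        """Flag only the one critical data type we picked for the hasty pass."""
        hits = ["SSN-like: " + m for m in SSN_RE.findall(text)]
        hits += ["Card-like: " + m for m in CARD_RE.findall(text) if luhn_ok(m)]
        return hits

    print(hasty_scan("Employee 555-12-3456 paid with card 4111 1111 1111 1111 today."))

Even something this crude is enough to start finding obvious piles of the critical data while the formal reporting from business units trickles in.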


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.