Securosis Research

How Market Forces Can Fix PCI

It’s no secret that I haven’t always been the biggest fan of PCI (the Payment Card Industry Data Security Standard). I believe that rather than blowing massive amounts of cash trying to lock down an inherently insecure system, we should look at building a more fundamentally secure way of performing payment transactions. Not that I think anything is ever perfectly secure, but there is a heck of a lot of room for progress, and our current focus has absolutely no chance of doing more than slightly staving off the inevitable. It’s like a turtle trying to outrun the truck that’s about to crush it: the turtle might buy itself an extra microsecond or two, but the outcome won’t change.

That said, I’ve also (fairly recently, and due in no small part to debates with Martin McKeay) come to believe that as flawed as PCI is, it’s also the single biggest driver of security improvements throughout the industry. It’s the primary force helping security pros gain access to executive management, even more so than any other regulation out there. And while I’d really like to see us focus more on a secure transaction ecosystem, until that wonderful day arrives I recognize we need to live with what we have, and use it to the best of our ability.

Rather than merely adding new controls to the PCI standard, I think the best way to do this is to fix some of the inherent problems with how the program is currently set up. If you’ve ever been involved with other types of auditing, PCI looks totally bass ackwards. The program itself isn’t designed to produce the most effective results. Here are a few changes that I think could make material improvements to PCI, possibly doubling the number of microseconds we have until we’re a steaming mass of roadkill:

  • Eliminate conflicts of interest by forbidding assessors from offering security tools/services to their clients: This is one of the single biggest flaws in PCI: assessors may also offer security services to their clients. This is a massive conflict of interest that’s illegal in financial audits due to all the problems it creates. It will royally piss off a few of the companies involved, but this change has to happen.
  • Decertify QSAs (assessors) that certify non-compliant companies: Although the PCI Council has a few people on its probation list, despite failure after failure we haven’t seen anyone penalized. Without these penalties, and a few sacrificial lambs, QSAs are not accountable for their actions.
  • Eliminate the practice of “shopping” for the cheapest/easiest QSA: Right now, if a company isn’t happy with its PCI assessment results, it can fire its QSA and go someplace else. Let’s steal a rule from the financial auditing playbook (not that that system is perfect) and force companies to disclose when they change assessors, and why.

There are other actions we could take, such as publicly disclosing all certified companies on a quarterly basis or requiring unscheduled spot-checks, but I don’t think we necessarily need those if we introduce more accountability into the system and reduce conflicts of interest.

Another very interesting development comes via @treyford on Twitter, pointing to a blog post by James DeLuccia and an article over at Wired. It seems that the auditor for CardSystems (you know, one of the first big breaches, back in 2005) is being sued for improperly certifying the company. If the courts step in and establish a negligence standard for security audits, the world might suddenly become mighty interesting. But as interesting (and hopefully motivating, at least to the auditors/assessors) as this is, I think we should focus on what we can do with PCI today to allow market forces to drive security and program improvements.


Macworld Security Article Up: The Truth About Apple Security

Right when the Macalope was sending along his take on the recent Computerworld editorial calling for the FTC to investigate Apple, Macworld asked me to write a more somber take. Here’s an excerpt:

On May 26, Macworld republished a controversial Computerworld article by Ira Winkler suggesting that Apple is “grossly negligent” when it comes to security, and should be investigated by the Federal Trade Commission for false advertising. The author was motivated to write this piece based on Apple’s recent failure to patch a known Java security flaw that was fixed on other platforms nearly six months ago. While the article raises some legitimate issues, it’s filled with hyperbole and inaccurate interpretations, and reaches the wrong conclusions. Here’s what you really need to know about the Java situation, Mac security in general, and the important lesson on how we control Apple’s approach to security. …

The real failure of this, and many other, calls for Mac security is that they fail to accurately identify those who are really responsible for Apple’s current security situation. It isn’t security researchers, malicious attackers, or even Apple itself, but Apple’s customers. Apple is an incredibly successful company because it produces products that people purchase. We still buy MacBooks despite the lack of a matte screen, for example. And until we tell Apple that security will affect our buying decisions, there’s little motivation for the company to change direction. Think of it from Apple’s perspective: Macs may be inherently less secure, but they are safer than the competition in the real world, and users aren’t reducing what they spend on Apple because of security problems. There is reasonable coverage of Mac security issues in the mainstream press (Mr. Winkler’s claims to the contrary), but without demonstrable losses it has yet to affect consumer behavior.
Don’t worry: I rip into Apple for their totally irresponsible handling of the Java flaw, but there really isn’t much motivation for Apple to make any major changes to how they handle things, as bad as they often are.


The State of Web Application and Data Security—Mid 2009

One of the more difficult aspects of the analyst gig is sorting through all the information you get, and isolating out any inherent biases. The kinds of inquiries we get from clients can all too easily skew our perceptions of the industry, since people tend to come to us for specific reasons, and those reasons don’t necessarily represent the mean of the industry. Aside from all the vendor updates (and customer references), our end user conversations usually involve helping someone with a specific problem – ranging from vendor selection, to basic technology education, to strategy development/problem solving. People call us when they need help, not when things are running well, so it’s all too easy to assume a particular technology is being used more widely than it really is, or a problem is bigger or smaller than it really is, because everyone calling us is asking about it. Countering this takes a lot of outreach to find out what people are really doing even when they aren’t calling us.

Over the past few weeks I’ve had a series of opportunities to work with end users outside the context of normal inbound inquiries, and it’s been fairly enlightening. These included direct client calls, executive roundtables such as one I participated in recently with IANS (with a mix from Fortune 50 to mid-size enterprises), and some outreach on our part. They reinforced some of what we’ve been thinking, while breaking other assumptions. I thought it would be good to compile these together into a “state of the industry” summary. Since I spend most of my time focused on web application and data security, I’ll only cover those areas:

  • When it comes to web application and data security, if there isn’t a compliance requirement, there isn’t budget – Nearly all of the security professionals we’ve spoken with recognize the importance of web application and data security, but they consistently tell us that unless there is a compliance requirement it’s very difficult for them to get budget. That’s not to say it’s impossible, but non-compliance projects (however important) are way down the priority list in most organizations. In a room of a dozen high-level security managers of (mostly) large enterprises, they all reinforced that compliance drove nearly all of their new projects, and there was little support for non-compliance-related web application or data security initiatives. I doubt this surprises any of you.
  • “Compliance” may mean more than compliance – Activities that are positioned as helping with compliance, even if they aren’t a direct requirement, are more likely to gain funding. This is especially true for projects that could reduce compliance costs. They will have a longer approval cycle, often 9 months or so, compared to the 3-6 months for directly-required compliance activities. Initiatives directly tied to limiting potential data breach notifications are the most cited driver. Two technology examples are full disk encryption and portable device control.
  • PCI is the single biggest compliance driver for web application and data security – I may not be thrilled with PCI, but it’s driving more web application and data security improvements than anything else.
  • The term Data Loss Prevention has lost meaning – I discussed this in a post last week. Even those who have gone through a DLP tool selection process often use the term to encompass more than the narrow definition we prefer.
  • It’s easier to get resources to do some things manually than to buy a tool – Although tools would be much more efficient and effective for some projects, in terms of costs and results, manual projects using existing resources are easier to get approval for. As one manager put it, “I already have the bodies, and I won’t get any more money for new tools.” The most common example cited was content discovery (we’ll talk more about this a few points down).
  • Most people use DLP for network (primarily email) monitoring, not content discovery or endpoint protection – Even though we tend to think discovery offers equal or greater value, most organizations with DLP use it for network monitoring.
  • Interest in content discovery, especially DLP-based, is high, but resources are hard to get for discovery projects – Most security managers I talk with are very interested in content discovery, but they are less educated on the options and don’t have the resources. They tell me that finding the data is the easy part – getting resources to do anything about it is the limiting factor.
  • The Web Application Firewall (WAF) and security source code tools markets are nearly equal in size, with more clients on WAFs, and more money spent on source code tools per client – While it’s hard to fully quantify, we think the source code tools cost more per implementation, but WAFs are in slightly wider use.
  • WAFs are a quicker hit for PCI compliance – Most organizations deploying WAFs do so for PCI compliance, and they’re seen as a quicker fix than secure source code projects.
  • Most WAF deployments are out of band, and false positives are a major problem for default deployments – Customers are installing WAFs for compliance, but are generally unable to deploy them inline (initially) due to the tuning requirements.
  • Full drive encryption is mature, and well deployed in the early mainstream – Full drive encryption, while not perfect, is deployable in even large enterprises. It’s now considered a level-setting best practice in financial services, and usage is growing in healthcare and insurance. Other asset recovery options, such as remote data destruction and phone home applications, are now seen as little more than snake oil. As one CISO told us, “I don’t care about the laptop, we just encrypt it and don’t worry about it when it goes missing.”
  • File and folder encryption is not in wide use – Very few organizations are performing any wide scale file/folder encryption, outside of some targeted encryption of PII for compliance requirements.
  • Database encryption is hard, and not widely used – Most organizations are dissatisfied with


The CIS Consensus Metrics and Project Quant

Just before release, the Center for Internet Security sent us a preview copy of the CIS Consensus Metrics. I’m a longtime fan of the Center, and, once I heard they were starting on this project, was looking forward to the results. Overall I think they did a solid job on a difficult problem. Is it perfect? Is it complete? No, but it’s a heck of a good start. A few things stand out:

  • They do a great job of interconnecting different metrics, and showing you how you can leverage a single collected data attribute across multiple higher-level metrics. For example, a single “technology” (product/version) is used in multiple places, for multiple metrics. It’s clear they’ve designed this to support a high degree of automation across multiple workflows, supporting technologies, and operational teams.
  • I like how they break out data attributes from security metrics. Attributes are the feeder data sets we use to create the security metrics. I’ve seen other systems that intermix the data with the metrics, creating confusion.
  • Their selected metrics are a reasonable starting point for characterizing a security program. They don’t cover everything, but that makes it more likely you can collect them in the first place. They make it clear this is a start, with more metrics coming down the road.
  • The metrics are broken out by business function – this version covering incident management, vulnerability management, patch management, application security, configuration management, and financial.
  • The metric descriptions are clear and concise, and show the logic behind them. This makes it easy to build your own moving forward.

There are a few things that could also be improved: the data attributes are exhaustive, and without automated tool support they will be very difficult to collect. The document suggests prioritization, but doesn’t provide any guidance; a companion paper would be nice.
This isn’t a mind-bending document, and we’ve seen many of these metrics before, but not usually organized together, freely available, well documented, or from a respected third party. I highly recommend you go get a copy.

Now on to the CIS Consensus Metrics and Project Quant. I’ve had some people asking me if Quant is dead thanks to the CIS metrics. While there’s the tiniest bit of overlap, the two projects have different goals, and are totally complementary. The CIS metrics are focused on providing an overview of an entire security program, while Quant is focused on building a detailed operational metrics model for patch management. In terms of value, Quant should provide:

  • Detailed costs associated with each step of a patch management process, and a model to predict costs associated with operational changes.
  • Measurements of operational efficiency at each step of patch management, to identify bottlenecks/inefficiencies and improve the process.
  • Overall efficiency metrics for the entire patch management process.

CIS and Quant overlap on the last goal, but not the first two. If anything, Quant will be able to feed the CIS metrics. The CIS metrics for patch management include:

  • Patch Policy Compliance
  • Patch Management Coverage
  • Mean Time to Patch

I strongly suspect all of these will appear in Quant, but we plan on digging into much greater depth to help the operational folks directly measure and optimize their processes.
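To make the three patch metrics concrete, here’s a minimal sketch of how they could be computed once the underlying data attributes are collected. The record layout is entirely hypothetical (not the CIS schema), and the 7-day policy window is an assumption for illustration:

```python
from datetime import date

# Hypothetical patch records: when the patch was released, when (if ever)
# it was applied on a given system, and whether the system is in policy scope.
records = [
    {"released": date(2009, 5, 1), "applied": date(2009, 5, 8), "in_scope": True},
    {"released": date(2009, 5, 1), "applied": date(2009, 5, 4), "in_scope": True},
    {"released": date(2009, 5, 1), "applied": None, "in_scope": True},
    {"released": date(2009, 5, 1), "applied": date(2009, 5, 2), "in_scope": False},
]

def patch_policy_compliance(recs, window_days=7):
    """Fraction of in-scope systems patched within the policy window."""
    scoped = [r for r in recs if r["in_scope"]]
    ok = [r for r in scoped
          if r["applied"] and (r["applied"] - r["released"]).days <= window_days]
    return len(ok) / len(scoped)

def patch_coverage(recs):
    """Fraction of all managed systems that have the patch applied at all."""
    return sum(1 for r in recs if r["applied"]) / len(recs)

def mean_time_to_patch(recs):
    """Average days from release to application, over patched systems only."""
    deltas = [(r["applied"] - r["released"]).days for r in recs if r["applied"]]
    return sum(deltas) / len(deltas)
```

Even a toy version like this shows the point about shared data attributes: all three metrics are derived from the same small set of collected fields.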


Is the Term “DLP” Finally Meaningless?

As most of you know, I’ve been covering DLP for entirely too long. It’s a major area of our research, with an entire section of our site dedicated to it. To be honest, I never really liked the term “Data Loss Prevention”. When this category first appeared, I used the term Content Monitoring and Filtering. The vendors didn’t like it, but since I wrote (with a colleague) the Gartner Magic Quadrant, they sort of rolled with it. The vendors preferred DLP since it sounded better for marketing purposes (I have to admit, it’s sexier than CMF). Once market momentum took over and end users started using DLP more than CMF, I rolled with it and followed the group consensus.

I never liked Data Loss Prevention since, in my mind, it could mean pretty much anything that “prevents data loss” – which is, for the most part, any security tool on the market. My choice was to either jump on the DLP bandwagon, or stick to my guns and use CMF even though no one would know what I was talking about. Thus I transitioned over, started using DLP, and focused my efforts on providing clear definitions and advice related to the technology.

Over the past two weeks I’ve come to realize that DLP, as a term for a specific category of technology, is pretty much dead. I’ve been invited to multiple DLP conferences/speaking opportunities, none of which are focused on what I’d consider DLP tools. I’ve been asked to help work on DLP training materials that don’t even have a chapter on DLP tools. I’ve had multiple end-user conversations on DLP… almost always referring to a different technology. The DLP vendors did such a good job of coming up with a sexy name for their technology that the rest of the world decided to use it… even when they had nothing to do with DLP.

Thus, any vendor reading this can consider this post my official recommendation that you drop the term DLP, and move to Content Monitoring and Protection (CMP – a term Chris Hoff first suggested, and that I’ve glommed onto).
Or just make something else up. I’ll continue using DLP on this site, but the non-DLP vendors have won: the term is completely diluted and no longer refers to a specific technology. Thus I’ll stop being incredibly anal about it, and you might see me associated with “DLP” when it has nothing to do with pure-play DLP as I’ve historically defined it. That said, in my own writing I still intend to use the term DLP in accordance with my very specific definition (below), and will start using CMP more heavily.

Data Loss Prevention/Content Monitoring and Protection is: Products that, based on central policies, identify, monitor, and protect data at rest, in motion, and in use, through deep content analysis.

For the record, I get all uppity about mangled definitions because all too often they’re used to create market confusion, and reduce value to users. People end up buying things that don’t do what they expected.
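The definition is dense, so here’s a toy sketch of what it implies architecturally: one central policy, applied consistently across data at rest and in motion. Everything here is hypothetical and vastly simplified; real products use far richer content analysis than a single regex:

```python
import re

# One central policy, defined once (toy SSN matcher as a stand-in for
# real deep content analysis: fingerprinting, statistical methods, etc.)
POLICY = {
    "name": "PII-SSN",
    "pattern": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "action": "block",
}

def analyze(content: str, policy=POLICY) -> bool:
    """Does this content match the centrally defined policy?"""
    return bool(policy["pattern"].search(content))

# ...and enforced at multiple points from that single policy.
def scan_at_rest(files: dict) -> list:
    """Data at rest: report which stored files match the policy."""
    return [path for path, text in files.items() if analyze(text)]

def filter_in_motion(message: str) -> str:
    """Data in motion: gate an outbound message on the same policy."""
    return "BLOCKED" if analyze(message) else "ALLOWED"
```

The point of the sketch is the "central policies" clause: the enforcement points differ, but they all consult the same policy definition, which is what separates CMP/DLP from a pile of unrelated point tools.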


Friday Summary – May 22, 2009

Adrian has been out sick with the flu all week. He claims it’s just the normal flu, but I swear he caught it from those bacon bits I saw him putting on his salad the other day. Either that, or he’s still recovering from last week’s Buffett outing. He also conveniently timed his recovery with his wife’s birthday, which I consider entirely too suspicious for mere coincidence.

While Adrian was out, we passed a couple of milestones with Project Quant. I think we’ve finally nailed a reasonable start to defining a patch management process, and I’ve drafted a sample of our first survey. We could use some feedback on both of these if you have the time. Next week will be dedicated to breaking out all the patch management phases and mapping out specific sub-processes. Once we have those, we can start defining the individual metrics. I’ve taken a preliminary look at the Center for Internet Security’s Consensus Metrics, and I don’t see any conflicts (or too much overlap), which is nice.

When we look at security metrics, we see that most fall into two broad categories. On one side are the fluffy (and thus crappy) risk/threat metrics we spend a lot of time debunking on this site. They are typically designed to feed into some sort of ROI model, and don’t really have much to do with getting your job done. I’m not calling all risk/threat work crap, just the models that like to put a pretty summary number at the end, usually with a dollar sign, but without any strong mathematical basis. On the other side are broad metrics like the Consensus Metrics, designed to give you a good snapshot of the overall management of your security program. These aren’t bad, are often quite useful when used properly, and can give you a view of how you are doing at the macro level. The one area where we haven’t seen a lot of work in the security community is operational metrics.
These are deep dive, granular models, built to measure operational efficiency in specific areas and help improve the associated processes. That’s what we’re trying to do with Quant – take one area of security, and build out metrics at a detailed enough level that they don’t just give you a high level overview, but help identify specific bottlenecks and inefficiencies. These kinds of metrics are far too detailed to achieve the high-level goals of programs like the Consensus Metrics, but are far more effective at benchmarking and improving the processes they cover. In my ideal world we would have a series of detailed metrics like Quant, feeding into overview models like the Consensus Metrics. We’ll have our broad program benchmarks, as well as detailed models for individual operational areas. My personal goal is to use Quant to really nail one area of operational efficiency, then grow out into neighboring processes, each with its own model, until we map out as many areas as possible. Pick a spot, perfect it, move on.

And now for the week in review:

Webcasts, Podcasts, Outside Writing, and Conferences
  • Martin and I cover a diverse collection of stories in Episode 151 of the Network Security Podcast.
  • I wrote up the OS X Java vulnerability for TidBITS.
  • I was quoted at MacNewsWorld on the same issue.
  • Another quote, this time in eWeek, on “data for ransom” schemes.
  • Dark Reading covered Project Quant in its post on the Center for Internet Security’s Consensus Metrics.

Favorite Securosis Posts
  • Rich: The Pragmatic Data (Information-Centric) Security Cycle. I’ve been doing a lot of thinking on more practical approaches to security in general, and this is one of the first outcomes.
  • Adrian: I’ve been feeling foul all week, and thus am going with the lighter side of security – I Heart Creative Spam.

Favorite Outside Posts
  • Adrian: Yes, Brownie himself is now considered a cybersecurity expert. Or not.
  • Rich: Johnny Long, founder of Hackers for Charity, is taking a year off to help the impoverished in Africa. He’s quit his job, and no one is paying for this. We just made a donation, and you should consider giving if you can.

Top News and Posts
  • Good details on the IIS WebDAV vulnerability by Thierry Zoller.
  • Hoff on the cloud and the Google outage.
  • Imperva points us to highlights of practical recommendations from the FBI and Secret Service on reducing financial cybercrime.
  • Oops – the National Archives lost a drive with sensitive information from the Clinton administration. As usual, lax controls were the cause.
  • Some solid advice on controlling yourself when you really want that tasty security job. You know, before you totally piss off the hiring manager.
  • We bet you didn’t know that Google Chrome was vulnerable to the exact same vulnerability as Safari in the Pwn2Own contest. That’s because they both use WebKit.
  • Adobe launches a Reader and Acrobat security initiative. New incident response, patch cycles, and secure development efforts. This is hopefully Adobe’s equivalent of the Trustworthy Computing Initiative.

Blog Comment of the Week
This week’s best comment was by Jim Heitela in response to Security Requirements for Electronic Medical Records:

Good suggestions. The other industry movement that will really amplify the need for healthcare organizations to get their security right is regional/national healthcare networks. A big portion of the healthcare IT $ in the Recovery Act are going towards establishing these networks, where the security of EPHI will only be as good as the weakest accessing node. Establishing adequate standards for partners in these networks will be pretty key. And, also thanks to changes that were started as part of the Recovery Act, healthcare organizations are now being required to actually assess 3rd party risk for business associates, versus just getting them to sign a business associate agreement.
Presumably this would be anyone in a RHIO/RHIN.


NAC Isn’t About User Authentication

I was reading a NAC post by Alan Shimel (gee, what a shock), and it brought up one of my pet peeves about NAC. Now I will fully admit that NAC isn’t an area I spend nearly as much time on as data and application security, but I still consider it one of our more fundamental security technologies, one that’s gotten a bad rap for the wrong reasons and will eventually be widely deployed.

The last time I talked about NAC in detail I focused on why it came to exist in the first place. Basically, we had no way to control what systems were connecting to our network, or monitor/verify the health of those systems. We, of course, also want to control which users end up on our network, and there’s been growing recognition for many years now that we need to do that lower on the OSI stack to protect ourselves from various kinds of attacks. Here’s how I’ve always seen it:

  • We use 802.1x to authenticate which users we want to allow to connect to our network.
  • We use NAC to decide which systems we want to allow to connect to our network.

I realize 802.1x is often ‘confused’ with NAC, but it’s a separate technology that happens to complement NAC. Alan puts it well:

“Authentication is where we screwed up. Who said NAC was about authentication? Listening yesterday you would think that 802.1x authentication was a direct result of NAC needing a secure authentication process. Guys lets not put the cart in front of the horse. 802.1x offers a lot of other features and advantages besides NAC authentication. In fact it is the other way around. NAC vendors adopted 802.1x because it offered some distinct advantages. It was widespread in wireless networks. However, JJ is right. It is complex. There are a lot of moving parts. If you have not done everything right to implement 802.1x on your network, don’t bother trying to use it for NAC. But if you had, it does work like a charm. As I have said before it is not for the faint of heart.”
Hopefully JJ and Alan won’t take too much umbrage from this post, but when looking at NAC I suggest keeping your goals in mind, along with an understanding of NAC’s relationship with 802.1x. The two are not the same thing, and you can implement either without the other.
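The separation of concerns reads cleanly in code. This is purely illustrative (the real mechanisms live in switches, supplicants, RADIUS servers, and posture agents, and all the names below are made up), but it shows why the user check and the system check are independent decisions:

```python
def dot1x_authenticate(user: str, credentials: str, directory: dict) -> bool:
    """802.1x answers: is this USER allowed to connect to the network?"""
    return directory.get(user) == credentials

def nac_posture_check(system: dict, policy: dict) -> bool:
    """NAC answers: is this SYSTEM healthy enough to connect?"""
    return (system["patched"] >= policy["min_patch_level"]
            and system["av_running"])

def admit(user, credentials, system, directory, policy) -> str:
    # The two checks are independent; you can deploy either without the other.
    if not dot1x_authenticate(user, credentials, directory):
        return "deny: user"
    if not nac_posture_check(system, policy):
        return "quarantine: posture"
    return "admit"
```

A network could run only the first function (pure 802.1x), only the second (NAC with some other admission mechanism), or both, which is exactly the point about the two being complementary rather than the same thing.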


I Heart Creative Spam

I hate to admit it, but I often delight in the sometimes brilliant creativity of those greedy assholes trying to sell me various products to improve the functioning of my rod or financial portfolio. I used to call this “spam haiku” and kept a running file to entertain audiences during presentations. Lately I’ve noticed some improvements in the general quality of this digital detritus, at least on the top end. While the bulk of spam lacks even the creativity of My Pet Goat, and targets a similar demographic, the best almost contain a self-awareness and internal irony more reminiscent of fine satire. Even messages that seem unintelligible on the surface make a wacky kind of poetry when viewed from a distance. Here are a few, all collected within the past few days:

  • Make two days nailing marathon semipellucid pigeonhearted (Semipellucid should be added to a dictionary someplace.)
  • Girls will drop underwear for you banyan speechmaker (Invokes images of steamy romance in the tropics… assuming you aren’t afraid of talking penises.)
  • How too Satisfy a Woman in Bed – Part 1 (No poetry, but simple and to the point, ignoring the totally unnecessary-for-filter-evasion spelling error. I’m still waiting anxiously for Part 2, since Part 1 failed to provide details on what to do after taking the blue pill. Do I simply wait? Am I supposed to engage in small talk? When do we actually move to the bed? Is a lounge chair acceptable, or do I have to pay extra for that? Part 1 is little more than a teaser; I think I should buy the full series.)
  • Read it, you freak (Shows excellent demographic research!)
  • When the darkness comes your watch will still show you the right time (This is purely anti-Semitic. I realize we Jews will be left in the darkness after the Rapture, but there’s no reason to flaunt it. At least my watch will work.)
  • Your virility will never disappear as long as you remain with us (Comforting, but this was the header of an AARP newsletter.)
  • Shove your giant and give her real tension. (Is it me, or does this conjure images of battling a big-ass biker as “she” nervously bites her nails in anticipation of your impending demise?)
  • You can look trendy as a real dandy. (Er…)
  • Real men don’t check the clock, they check the watch. (Damn straight! And they shove giants. Can’t forget the giants.)
  • Your rocket will fly higher aiguille campanulate runes relapse
  • Get a watch that was sent you from heaven above. (Well, if it’s from heaven, I can’t say no.)
  • Empower your fleshy thing (Excellent. Its incubation in the lab is nearly complete, and I’ve been searching for a suitable power source to support its mission of world domination.)
  • Your male stamina will return to you like a boomerang. (It will go flying off to the far corner of the park where my neighbor’s dog shreds it to pieces? Perhaps evoking the wrong image here.)
  • Your wang will reach ceiling (I do have a vintage Wang in my historical computer collection. Is this a robotic arm or some sort of ceiling mount? I must find out. If it’s in reference to my friend’s cousin Wang, I’m not sure I’d call him “mine”, and he already owns a ladder.)
  • Your stiff wang = her moans (Wang isn’t dead, but I’m sure his wife would moan in agony at her loss if he was. What’s with the obsession with my friend’s cousin?)
  • Be more than a man with a Submariner SS watch. (Like… a cyborg?!?!)
  • Your account has been disabled (I guess we’re done then.)


The Network Security Podcast, Episode 151

We probably more than doubled the number of stories we talked about this week, but we only added about 8 minutes to the length of the podcast. You can consider this the “death by a thousand cuts” podcast as we cover a string of shorter stories, ranging from a major IIS vulnerability, through breathalyzer spaghetti code, to how to get started in security. We also spend a bit of time talking about Black Hat and Defcon, and celebrate hitting 500,000 downloads on episode 150. Someone call a numerologist!

Network Security Podcast, Episode 151, May 19, 2009

Show Notes:
  • Breathalyzer source code released as part of a DUI defense… and it’s a mess.
  • A DHS system was hacked, but only a little information made it out.
  • Secret questions for password resets are often weaker than passwords, and easy to guess.
  • Does tokenization solve anything? Yep.
  • Kaspersky finds malware installed on a brand new netbook.
  • Malware inserts malicious links into Google searches.
  • Google Chrome was vulnerable to the Safari Pwn2Own bug. Both are WebKit-based, so we shouldn’t be too surprised.
  • Information on the IIS 6 vulnerability/0day.
  • How to get started in information security, by Paul Asadoorian.

Tonight’s Music: Liberate Your Mind by The Ginger Ninjas


The Pragmatic Data (Information-Centric) Security Cycle

Way back when I started Securosis, I came up with something called the Data Security Lifecycle, which I later renamed the Information-Centric Security Cycle. While I think it does a good job of capturing all the components of data security, it’s also somewhat dense. That lifecycle was designed to be a comprehensive outline of protective controls and information management, but I’ve since realized that if you have a specific data security problem, it isn’t the best place to start.

In a couple of weeks I’ll be speaking at the TechTarget Financial Information Security Decisions conference in New York, where I’m presenting Pragmatic Data Security. By “pragmatic” I mean something you can implement as soon as you get home. Where the lifecycle answers the question, “How can I secure all my data throughout its entire lifecycle?”, pragmatic data security answers, “How can I protect this specific data at this point in time, in my existing environment?” It starts with a slimmed down cycle:

  1. Define what information you want to protect (specifically, not general data classification).
  2. Discover where it’s located (various tools/techniques, preferably automated, like DLP, rather than manual).
  3. Secure the data where it’s stored, and/or eliminate data where it shouldn’t be (access controls, encryption).
  4. Monitor data usage (various tools, including DLP, DAM, logs, SIEM).
  5. Protect the data from exfiltration (DLP, USB control, email security, web gateways, etc.).

For example, if you want to protect credit card numbers you’d define them in step 1, use DLP content discovery in step 2 to locate where they are stored, remove them or lock the repositories down in step 3, use DAM and DLP to monitor where they’re going in step 4, and use blocking technologies to keep them from leaving the organization in step 5. All too often I see people get totally wrapped up in complex “boil the ocean” projects that never go anywhere, instead of defining and solving a specific problem.
You don’t need to start your entire data security program with some massive data classification project. Pick one defined type of data/information, and just go protect it. Find it, lock it down, watch how it’s being used, and stop it from going where you don’t want. Yeah, parts are hard, but hard != impossible. If you keep your focus, any hard problem is just a series of smaller, defined steps.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments and input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.