Securosis Research

Cash, Coke & Stuxnet: an Alternative Perspective

Now that the media has feasted on the Stuxnet carcass, it gives me a moment of pause. What of a different perspective? I know – madness, right? But seriously, we have seen the media in a lather over this story for some time now. Let’s be honest – to someone who has worked in the SCADA community, this really is nothing new. It’s just one incident that happened to come to light.

An alternative angle to the story, which seems to have been shied away from, is under-financed but motivated agents. Technical ‘resources’ with too much free time and a wealth of knowledge. This is not a new idea – just look at the abundance of open source projects that rely heavily on this concept: smart people with free time on their hands. What happens when you combine a surfeit of technical competence with a criminal bent?

This was well documented back in the ’80s, when a group of German hackers led by Karl Koch was arrested for selling source code they had purloined from US government and corporate computers to the KGB. In this case the hackers were paid in cocaine and cash. Nothing major, just enough to keep them happy (and awake during their coke-fueled coding sessions). At least that was the idea, until they were caught and Karl met his untimely end in a German forest in 1989.

The argument will invariably be: how could they have the knowledge required for some of these attacks? Ever worked for a power company? There are usually a good number of disgruntled workers, and $1,000 US will go a long way in some countries. It was also not difficult to gain access to the documentation from most control system vendors until recently.

To borrow from Rich Mogull: funding = resources – the biggest of which are time and knowledge. Looking back to my earlier statement, this is something that a lot of disaffected hackers in former Eastern Bloc countries have in droves. Throw in some cash and drugs and you could have a motivated crew. I don’t think this is what happened here, but you must admit it’s within the realm of possibility. After all, this is not without precedent. There’s a skeleton in a forest someplace to prove it.


Counterpoint: Availability Is Job #1

Rich makes the case that A Is Not for Availability in this week’s FireStarter. Basically his thinking is that the A in the CIA triad needs to be attribution, rather than availability – at least when thinking about security information (as opposed to infrastructure). Turns out that was a rather controversial position within the Securosis band. Yes, that’s right, we don’t always agree with each other. Some research firms gloss over these disagreements, forcing a measure of consensus, and then force every analyst to toe the line. Lord knows, you can never disagree in front of a client. Never. Well, Securosis is not your grandpappy’s research firm. Not only do we disagree with each other, but we call each other out, usually in a fairly public manner.

Rich is not wrong that attribution is important – whether discussing information or infrastructure security. Knowing who is doing what is critical. We’ve done a ton of research on the importance of integrating identity information into your security program, and will continue to – especially now that Gunnar is around to teach us what we don’t know. But some of us are not ready to give up the ghost on availability. Not just yet, anyway.

One of the core tenets of the Pragmatic CSO philosophy is a concept I called the Reasons to Secure. There are five, and #1 is Maintain Business System Availability. You see, if key business systems go down, you are out of business. Period. If it’s a security breach that took the systems down, you might as well dust off your resume – you’ll need it sooner rather than later. Again, I’m not going to dispute the importance of attribution, especially as data continues to spread to the four corners of the world and we continue to lose control of it. But not to the exclusion of availability as a core consideration for every decision we make.

And I’m not alone in challenging this contention. James Arlen, one of our Canadian Wonder Twins, sent this succinct response to our internal mailing list this AM:

As someone who is often found ranting that availability has to be the first member of the CIA triad instead of the last, I’m not sure that I can just walk away from it. I’m going to have to have some kind of support, perhaps a process to get from hugging availability to thinking about the problem more holistically. Is this ultimately about the maturation of the average CIO from superannuated VP of IT to a real information manager who is capable of paying attention to all the elements of attribution (as you so eloquently describe) and beginning the process of folding in the kind of information risk management that the CISOs have been carrying while the CIO plays with blinky lights?

James makes an interesting point here, and it’s clearly something echoed in the P-CSO: the importance of thinking in business terms, which means ensuring everything is brought back to business impact. The concept of information risk management is still pretty nebulous, but ultimately any decision we make to restrict access or bolster defenses needs to be based on the economic impact on the business.

So maybe the CIA acronym becomes CIA^2, with both availability and attribution as key aspects of security. But at least some of us believe you neglect availability at your peril. I’m pretty sure the CEO is a lot more interested in whether the systems that drive the business are running than in who is doing what. At least at the highest level.


Criminal Key Management Fail

Lin Mun Poo of Malaysia sounds like a pretty bad-ass criminal hacker. He cracked into the Federal Reserve and snagged hundreds of thousands of card numbers from a bank in Cleveland. But perhaps his intellectual skills don’t extend quite as far as they should for criminal survival. The article describes how he was nabbed selling card numbers in Brooklyn a few hours after landing at Kennedy airport. If you’re a conspiracy nut, the following sentence might indicate the government has some secret master key to crack your encryption:

The stolen card numbers were found on his encrypted laptop after he was nabbed…

In our internal chat room, Dave Lewis thinks this was all a sting, and his computer was probably unlocked as he was showing off the numbers. Considering how fast they nabbed him, that’s my guess too. You sort of have to wonder why he came to the US in the first place, considering it’s easy to sell that stuff in underground markets – which also supports the sting theory. But there’s one more interesting bit:

Poo has also confessed to breaking into networks of several international banks and a major Defense contractor, the complaint states.

Gee, I wonder when we’ll see those disclosures go out? Yeah, probably not.


No More Flat Networks

As I continue working through the nuances of my 2011 research agenda, I’ve been throwing trial balloons at anyone and everyone I can. I posted an initial concept I called Vaults within Vaults and got some decent feedback. At this point, I’ve got a working concept for the philosophies we’ll need to embrace to stand a chance moving forward. As the Vaults concept describes, we need to segment our networks to provide some roadblocks to prevent unfettered access to our most sensitive information. The importance of this is highlighted in PCI, which means none of this is novel – it’s something you should be doing now.

Stuxnet was a big wake-up call for a lot of folks in security, and not just organizations protecting Siemens control systems. The attack vectors shown really represent where malware is going. Multiple attack paths. Persistence. Lightning-fast propagation using a variety of techniques. Multiple zero-day attacks. And using traditional operating systems to get presence and then pivoting to attack the real target. Now that the map has been drawn by some very smart (and very well funded) attackers, we’ll see these same techniques employed en masse by many less sophisticated attackers.

So what are the keys to stopping this kind of next-generation attack code? OK, the first is prayer. If you believe in a higher power, pray that the bad guys are smitten and turned into pillars of salt or something. Wouldn’t that be cool? But in reality, waiting for the gods to strike down your adversaries usually isn’t a winning battlefield strategy. Failing that, you need to make it harder for the attackers to get at your information.

So I liked this article on the Tofino blog. It makes a lot of points we’ve been discussing for a while, within the context of Stuxnet. Flat networks are bad. Segmented networks are good. Discover and classify your sensitive data, define proper security zones to segregate data, and only then design the network architecture to provide adequate segmentation. I’ll be talking a lot more about these topics in 2011. But in the meantime, start thinking about how and where you can/should be adding more segments to your network architecture.
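To make the zoning idea a bit more concrete, here is a minimal sketch – in Python, with invented zone names and ports – of a default-deny, zone-to-zone policy. Nothing crosses a segment boundary unless it is explicitly allowed; this is an illustration of the concept, not a reference implementation:

ALLOWED_FLOWS = {
    # (source zone, destination zone): set of allowed destination ports
    ("corp_desktops", "dmz_web"):       {443},
    ("dmz_web",       "app_tier"):      {8443},
    ("app_tier",      "cardholder_db"): {5432},
    # Note there is no entry letting corp_desktops reach cardholder_db directly.
}

def is_allowed(src_zone, dst_zone, dst_port):
    """Default deny: a flow is permitted only if explicitly listed."""
    return dst_port in ALLOWED_FLOWS.get((src_zone, dst_zone), set())

print(is_allowed("corp_desktops", "dmz_web", 443))         # True
print(is_allowed("corp_desktops", "cardholder_db", 5432))  # False - must traverse the tiers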


Friday Summary: November 19, 2010

I got distracted by email. The Friday Summary was going to be about columnar databases. I think. Maybe it’s the flu I have had all week, or my memory is going, or just perhaps the subject was not all that interesting to begin with. But the email that distracted me was kind of funny and kinda sad. A former friend and co-worker contacted me for the first time in something like 10 years. Out of the blue. The gist of the email was that he was being harassed by someone sending threatening emails. After a while he started to worry and wondered if the mystery harasser was serious. So he contacted the police and forwarded the information to the FBI. No response. He met with the police, and they had no interest in further investigation unless there was something more substantive. You know, like a chalk outline. In frustration he reached out to me to see if we could discover the sender.

Now I am not exactly a forensics expert, but I can read email headers and run reverse DNS lookups and whois. And in about three minutes I walked this person through the email header and showed the originating accounts, domains, and servers. Easy. (A minimal sketch of that header walk follows at the end of this post.) Now I must assume that if you know about email header information and don’t want to be traced, with a little effort you could easily cover your tracks. Temp Gmail or Yahoo accounts? Cloud or hijacked servers, or even a public library computer to hide your tracks? No? How about using your freakin’ BlackBerry with your real email account, but just changing the user name? Yeah, that’s the ticket! I am occasionally happy that there are stupid people on the planet.

Oh, and since you asked for it (and you know who you are), here’s the Monkey Dance: (-shuffle-shuffle-spin-shuffle-backflip). The video is too embarrassing to post. Yeah, you can make us dance for a 99-cent Kindle subscription. You ought to see what we do for an $8k retainer!

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Someone seems to think we’re one of the top 5 security influencers. Rich thinks Rothman must have paid them.
  • Rich’s presentation at the Cloud Security Congress mentioned in this SearchSecurity article.
  • Adrian’s comments on a database security survey.

Favorite Securosis Posts

  • Mike Rothman: Datum Entanglement. Rich’s big thoughts on where information-centric security needs to go. At least the start of those big thoughts…
  • Rich: Rethinking Security.
  • Adrian Lane: Datum Entanglement. Geek out! Le Geek, C’est Chic.

Other Securosis Posts

  • Incite 11/17/2010: Hitting for Average.
  • What You Need to Know about DLP for PCI 2.0.
  • React Faster and Better: Mop up, Analyze, and QA.

Favorite Outside Posts

  • Mike Rothman: 2011: The Death of Security As We Know IT or Operationalizing Security. From Amrit: “Security must be operationalized, it must become part of the lifecycle of everything IT.” Yeah, man.
  • Rich: Brian Krebs on the foolishness of counting vulnerabilities.
  • Adrian Lane: Amrit’s Operationalizing Security. Because, in its current position, security can only say “No”.
  • Gunnar Peterson: Challenge of Sandboxing by Scott Stender.

Project Quant Posts

  • NSO Quant: Index of Posts.
  • NSO Quant: Health Metrics – Device Health.
  • NSO Quant: Manage Metrics – Monitor Issues/Tune IDS/IPS.
  • NSO Quant: Manage Metrics – Deploy and Audit/Validate.

Research Reports and Presentations

  • The Securosis 2010 Data Security Survey.
  • Monitoring up the Stack: Adding Value to SIEM.
  • Network Security Operations Quant Metrics Model.
  • Network Security Operations Quant Report.

Top News and Posts

  • Adobe Releases Reader X with Sandbox.
  • FreeBSD Sendmail Problem; update: The Problem Is with Gmail.
  • Lawmakers take away TSA’s fringe benefits.
  • Drive-by Downloads Still Running Wild.

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to Ian Krieger, in response to Datum Entanglement.

Whilst it is a really stupidly-complex [sic] introduction it gets you in the right frame of mind, that is the complexities in securing data (yes I’m talking the plural here) when you have the ability to copy, or extract, it. Looking forward to the next pieces and see where your presentation goes.
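For the curious, here is the promised sketch of that header walk, in Python. It is a rough, assumption-laden illustration: the message, addresses, and hostnames are made up (documentation/test ranges), and real tracing also involves whois lookups and healthy skepticism about forged Received: headers.

import re
import socket
from email import message_from_string

# Made-up raw message using documentation address ranges; substitute a real message source.
RAW_MESSAGE = (
    "Received: from mail.example.org (mail.example.org [192.0.2.10]) by mx.example.net; "
    "Fri, 19 Nov 2010 08:15:00 -0500\n"
    "Received: from unknown (customer.example.net [198.51.100.7]) by mail.example.org; "
    "Fri, 19 Nov 2010 08:14:30 -0500\n"
    "From: someone@example.org\n"
    "Subject: hello\n"
    "\n"
    "body text\n"
)

IP_RE = re.compile(r"\[(\d{1,3}(?:\.\d{1,3}){3})\]")

msg = message_from_string(RAW_MESSAGE)
# Each relay prepends its own Received: header, so the last one is closest to the sender.
for received in reversed(msg.get_all("Received", [])):
    for ip in IP_RE.findall(received):
        try:
            hostname = socket.gethostbyaddr(ip)[0]
        except OSError:
            hostname = "no PTR record"
        print("%-15s -> %s" % (ip, hostname))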


Incite 11/17/2010: Hitting for Average

We all need some way to measure ourselves. Are we doing better? Worse? Are we winning or losing? What game are we playing again? It’s all about this mentality of needing to beat the average. I hate it. What is average anyway?

We took the kids in for their well checkups over the past week. XX1 is average, hovering around the 50th percentile in height and weight. XX2 is pretty close to average as well. But the Boy is small. Relative to what? Other kids just turning 7? Why do I care again? Will the girlies not dig him if he’s not average?

We see the same crap in our jobs. Everyone loves a benchmark, so they can spin the numbers to make themselves look good. In security we have very few quantitative ways to measure ourselves, so not many know if they are, in fact, average. Personally I don’t care if I’m average. I don’t care if I’m exceptional because I don’t know what that means. I did well on standardized tests growing up, but what did that prove? That I could take a test? Am I better now because I was above the arbitrary average then? Will that help me fight a bear? Right, probably not.

I’d rather we all focus on learning what we need to. I don’t know what that means either, but it seems like a better goal than trying to beat the average. You see, I need to learn patience. So I guess I can’t be above average all the time because I’ve got to get comfortable waiting for whatever it is I’m waiting for. Which is maybe to be above average in something. Anything.

So what do you tell your kids? It’s a tough world out there and beating the average means something to most people. They’ll compete with people their entire lives. As long as they choose to play that game, that is. I tell them to do their best. Whatever that means. That goes for you too. Even if your best is below the arbitrary average, as long as you know you did your best, it’s OK. Regardless of what anyone else says.

Now a corollary to that is the scourge of delusion. You really need to do your best. Far too many folks accept mediocrity because they fool themselves into thinking they did try hard. I’m not talking about that. Only you know if you really tried or whether you mailed it in. And learn from every experience. That will allow you to do a little more or better the next time. Sure it’s scary and squishy to stop competing and let go of the scorecard. But if you are constantly grumpy and disappointed in yourself and everyone around you, maybe give it a try. You’ve got nothing to lose, except perhaps that perforated ulcer.

Photo credits: “Not Your Average Joe’s” originally uploaded by bon_here

Incite 4 U

Rich is playing in the clouds (at the Cloud Security Summit) this week, so he’s MIA. I’m sure he’s holding court at the bar in Orlando, debating the merits of the uncertainty principle and whether Arrogant Bastard Ale was really named after him.

Holy backwards-looking indicators, Batman! – It must be that time of year, when Symantec (formerly PGP) pays Larry Ponemon lots of shekels to run a survey telling us how encryption use is skyrocketing. Ah, thanks, Captain Obvious. Evidently 84% of nearly 1,000 companies are using some form of encryption. Wonder if they counted SSL? 62% use file server crypto, 59% full disk encryption, and 57% use database encryption. The numbers are the numbers, but that seems low for FDE use and high for DBMS encryption. But most interestingly, nearly 70% said compliance was the main driver for crypto deployment. That was the first time compliance was the main driver? Really? Not sure what planet the respondents of previous surveys inhabit, but on Planet Securosis compliance has been driving crypto since, well, since Top Secret ruled the world. You think companies actually want to be secure? Come on now, that’s ridiculous. It isn’t until the audit deficiency is documented that there is any urgency for crypto. Or you lose a laptop and then your CEO has to fall on his/her disclosure sword. Wonder if that was one of the choices… – MR

More secure, or passing the security buck? – Banking applications on cell phones seem to be a hit with customers. This type of service really makes sense for banks, as it greatly reduces their customer service costs and allows the bank to provide more easy-to-use services to the customer, enhancing their impression of the bank. Are you worried about security? From the customer’s standpoint, the security of their account(s) is probably better in the short term, if for no other reason than that mobile phone-based attacks are not as prevalent as web-based attacks. But from the bank’s perspective, this is a big win! All they need to do is worry about the security of their app. The cell providers and the phone platform providers inherit the rest of the burden! In the event a compromise happens, now there are three possible parties who could be responsible, any of which can accuse the other players of failing to do their job on security. In the confusion the customer will be left holding the (empty) bag. It will be interesting to see how this shakes out, as you know black hats are looking into War Driving, the cellular version. – AL

We aren’t in the excuses business, Mr. Non-SSL web site – I’m not a big fan of excuses, just ask my kids. So it’s infuriating to see apologists still out there trying to rationalize why a lot of websites don’t go all SSL. Like the folks at Zscaler in their “Why the web has not switched to SSL-only yet?” post. Sorry, with the exception of one issue, that’s all crap. Server overhead? Hogwash. Gmail proved that’s a load of the brown stuff. Increased latency? Where? Crap. How SSL impacts content delivery networks (mostly in terms of certificate integrity) is


Datum Entanglement

I’m hanging out in the Red Carpet Club at the Orlando airport, waiting to head home from the Cloud Security Alliance Congress. Yesterday Chris Hoff and I presented a three-part series – first our joint presentation on disruptive innovation and cloud computing (WINnovation), then his awesome presentation on cloud computing infrastructure security issues (and more: Cloudinomicon), and finally Quantum Datum, my session on information-centric security for cloud computing. It was one of the most complex presentations I’ve ever put together in terms of content and delivery, and the feedback was pretty positive, with a few things I need to fix. Weirdly enough I was asked for more of the esoteric content, and less of the practical, which is sorta backwards. I enjoy the esoteric, but try not to do too much of it because we analyst types already have a reputation for forgetting about the real world.

While I don’t intend to blog the entire presentation, and the slides don’t make sense without the talk, I’m going to break out some of the interesting bits as separate posts. As you can imagine from the title, the ‘theme’ was quantum mechanics, which provides some great metaphors for certain information-centric security issues.

One of the most fascinating bits about quantum mechanics is the concept of quantum entanglement, sometimes called “spooky action at a distance”. Assuming you trust a history major to talk quantum physics, quantum entanglement is a phenomenon that emerges due to the wave-like nature of subatomic particles. Things like electrons don’t behave like marbles, but more like a cross between a marble and a wave. They exhibit characteristics of both particles and waves. One consequence is that you can split certain particles into smaller particles, each of which represents a different part of the parent wave function. For example, after the split you end up with one piece with an ‘up’ spin and another with a ‘down’ spin, but never two ups or two downs. You can then separate these particles over a distance, and measuring the state of one instantly collapses the wave function and determines the state of the other. Thus you can instantly affect state across arbitrary distances – but it doesn’t violate the speed of light because technically no information is transferred.

This is an interesting metaphor for data loss. If I have a given datum (the singular of ‘data’), the security state of any copy of that datum is affected by the state of all other copies of that datum. Well, sort of. Unlike quantum entanglement, this is a one-way function: the security state of any datum can only decrease the security of all the rest, never increase it. This is why data loss is such an intractable problem. The more copies of a given datum (which could be a single number, or a 2-hour-long movie), the greater the probability of a security failure (assuming distribution), and the weaker overall relative security becomes. If one copy leaks, considering the interconnectivity of the Internet, that single copy is now potentially available, and thus the security of all the other copies is reduced.

This is really a stupidly complex way of saying that the overall security of a given datum is no greater than the weakest security of any copy. Now think in practical terms. It doesn’t matter how secure your database server is if someone can run a query, extract the data, dump it into an Excel spreadsheet, and email it. I believe the scientific term for this is ‘bummer’.
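To put a rough number on that weakest-link point: if you assume – purely for the sake of illustration – that each copy of a datum can leak independently with some probability, the chance that the datum is exposed somewhere grows with every copy. The per-copy figures below are invented.

from functools import reduce

def exposure_probability(per_copy_risks):
    """P(at least one copy leaks) = 1 - product over copies of (1 - p_i)."""
    return 1 - reduce(lambda acc, p: acc * (1 - p), per_copy_risks, 1.0)

print(exposure_probability([0.01]))              # one well-guarded copy: 0.01
print(exposure_probability([0.01, 0.05, 0.20]))  # add a laptop copy and a spreadsheet: ~0.25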


Rethinking Security

Security is broken. Captain Obvious here. We all know that, but it doesn’t really help, does it? I came across a good post by Bobby Dominguez, who I met through Shimmy (but I won’t hold that against Bobby), which talks about rethinking security. To provide the proper context, check out this excerpt, which beautifully highlights our futility:

While all good security practitioners employ risk management techniques to protect the enterprise, we still can only get funding as an after-the-fact remediation. When we do get mitigation funding we deploy technologies that reduce impact or the likelihood of an event occurring. But these events are based on existing threats and the threats are evolving faster than point-solutions can be produced.

Wow. That hits me like a kidney punch. You? Basically we aren’t getting it done, and the game (as it’s laid out today) is stacked against us. So we need to change the game, and Bobby has a few ideas on how to do that. The good news is that much of what he’s saying here has been a cornerstone of what Securosis has been preaching for years, and I’ll use our terms to describe Bobby’s points.

Information-centric security: Yes, focus on what needs to be protected rather than an infrastructure-based security model with appliances layered upon appliances… This is the hard path. You get no credit when you still have to layer on those appliances because of compliance mandates. But still, if you want to have any chance, you need to start thinking about protecting the data, not just the devices.

Trust no one: There is no insider or outsider any more. They are all threats, and must be treated as such. That means embracing things like user activity monitoring and checking for anomalous behavior. And that even applies to you. Separation of duties is a good thing.

Embrace the commodities: Bobby talks sense about treating mature security technologies as the commodities they are. Why buy premium AV when they all suck (relatively) equally? Things like firewalls and IDS, and a bunch of other stuff, fit into the same category. That doesn’t mean there aren’t some capabilities that break commodity gear out of commodity status (like application-aware firewalls), but for the most part focus your spending on technologies that will protect the most valuable stuff – that generally means focusing on the application layer.

React Faster and Better: Despite Bobby’s rather abstract analogy about treating your network like a human body (so I should ply it with beer and other hallucinogens to make daily existence tolerable, right?), his point is that we are already compromised. So focus your antibodies (security defenses) on figuring out where and how you are sick, and on attacking the infection. Yes, Rich and I are writing about that right now, so you have plenty of context for this concept.

All told, I think Bobby does a good job of underscoring the fact that the status quo is dead, whether you want to believe it or not. There are some things we have to do because of old-line thinking and compliance mandates, but putting those requirements within the context of a different mindset can make a huge difference.


Incident Response Fundamentals: Mop up, Analyze, and QA

You did well. You followed your incident response plan and the fire is out. Too bad that was the easy part – you now get to start the long journey from ending a crisis all the way back to normal. If we get back to our before, during, and after segmentation, this is the ‘after’ part. In the vast majority of incidents the real work begins after the immediate incident is over, when you’re faced with the task of returning operations to status quo ante, finding the root cause of the problem, and putting controls in place to ensure it doesn’t happen again.

The ‘after’ part of the process consists of three phases (Mop up, Analyze, and QA), two of which overlap and can be performed concurrently. And remember – we are describing a full incident response process and tend to use major situations in our examples, but everything we are talking about scales down for smaller incidents too, which might be managed by a single person in a matter of minutes or hours. The process should scale both up and down, depending on the severity and complexity of an incident, but even dealing with what seems to be the simplest incident requires a structured process. That way you won’t miss anything.

Mop up

We steal the term “mop up” from the world of firefighting – where cleaning up after yourself may literally involve a mop. Hopefully we won’t need to break out the mops in an IT incident (though stranger things have happened), but the concept is the same – clean up after yourself, and do what’s required to restore normal operations. This usually occurs concurrently with your full investigation and root cause analysis. There are two aspects to mopping up, each performed by a different team:

Cleaning up incident response changes: During a response we may take actions that disrupt normal business operations, such as shutting down certain kinds of traffic, filtering email attachments, and locking down storage access. During the mop up we carefully return to our pre-incident state, but only as we determine it’s safe to do so, and some controls implemented during the response may remain in place. For example, during an incident you might have blocked all traffic on a certain port to disable the command and control network of a malware infection. During the mop up you might reopen the port, or open it and filter certain egress destinations. Mop up is complete when you have restored all changes to where you were before the incident, or have accepted specific changes as a permanent part of your standards/configurations. Some changes – such as updating patch levels – will clearly stay, while others – including temporary workarounds – need to be backed out as a permanent solution goes into place.

Restoring operations: While the incident responders focus on investigation and cleaning out the temporary controls they put in place during the incident, IT operations handles updating software and restoring normal operations. This could mean updating patch levels on all systems, checking for and cleaning malware, restoring systems from backup and bringing them back up to date, and so on. The incident response team defines the plan to safely return to operations and cleans up the remnants of its actions, while the IT operations teams face the tougher task of getting all the systems and networks where they need to be on a ‘permanent’ basis (not that anything in IT is permanent, but you know what we mean).
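One way to keep the mop up honest is to track every temporary control put in place during the response, so each one is eventually either promoted to a permanent standard or explicitly backed out. Below is a minimal Python sketch of that bookkeeping; the field names and the example controls are hypothetical, not part of any formal incident response toolset.

from dataclasses import dataclass, field

@dataclass
class ResponseControl:
    description: str
    implemented_by: str
    permanent: bool = False     # promoted into standards/configurations
    backed_out: bool = False    # removed once a permanent fix is in place

@dataclass
class IncidentRecord:
    incident_id: str
    controls: list = field(default_factory=list)

    def open_items(self):
        """Controls still awaiting a mop up decision: keep permanently or back out."""
        return [c for c in self.controls if not (c.permanent or c.backed_out)]

incident = IncidentRecord("IR-2010-042")
incident.controls.append(ResponseControl("Blocked egress TCP/6667 to disrupt C&C traffic", "netops"))
incident.controls.append(ResponseControl("Emergency patch on the mail gateway", "sysadmin"))
incident.controls[1].permanent = True  # the patch stays as part of the standard build
print([c.description for c in incident.open_items()])  # the port block still needs a decision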
Investigation and Analysis

The initial incident is under control, and operations are being restored to normal as a result of the mop up. Now is when you start the in-depth investigation of the incident to determine its root cause and figure out what you need to do to prevent a similar incident from happening in the future. Since you’ve handled the immediate problem, you should already have a good idea of what happened, but that’s a far cry from a full investigation. To use a medical analogy, think of it as switching from treating the symptoms to treating the source of the infection. To go back to our malware example, you can often manage the immediate incident even without knowing how the initial infection took place. Or in the case of a major malicious data leak, you switch from containing the leak and taking immediate action against the employee to building the forensic evidence required for legal action, and ensuring the leak remains an isolated incident rather than a systematic loss of data.

In the investigation we piece together all the information collected as part of the incident response with as much additional data as we can find, to help produce an accurate timeline of what happened and why. This is a key reason we push heavy monitoring so strongly, as a core process throughout your organization – modern incidents and attacks can easily slip through the gaps of ‘point’ tools and basic logs. Extensive monitoring of all aspects of your environment (both the infrastructure and up the stack), often using a variety of technologies, provides more complete information for investigation and analysis. We have already talked about various data sources throughout this series, so instead of rehashing them, here are a few key areas that tend to provide more useful nuggets of information:

Beyond events: Although IDS/IPS, SIEM, and firewall logs are great for managing an ongoing incident, they may provide an incomplete picture during your deeper investigation. They tend to only record information when they detect a problem, which doesn’t help much if you don’t have the right signature or trigger in place. That’s where a network forensics (full network packet capture) solution comes in – by recording everything going on within the network, these devices allow you to look for the trails you would otherwise miss, and piece together exactly what happened using real data.

System forensics: Some of the most valuable tools for analyzing servers and endpoints are system forensics tools. OS and application logs are all too easy to fudge during an attack. These tools are also


What You Need to Know about DLP for PCI 2.0

As I mentioned in my PCI 2.0 post, one of the new version’s most significant changes is that organizations must now not only confirm that they know where all their cardholder data is, but also document how they know this and keep it up to date between assessments. You can do this manually, for now, but I suspect that won’t work except in the most basic environments. The rest of you will probably be looking at using Data Loss Prevention for content discovery. Why DLP? Because it’s the only technology I know of that can accurately and effectively gather the information you need. For more details (much more detail) check out my big DLP guide.

For those of you looking at DLP or an alternate technology to help with PCI 2.0, here are some things to look for:

  • A content analysis engine able to accurately detect PAN data. A good regular expression is a start, although without some additional tweaking it will probably produce a lot of false positives. Potentially a ton… (A small illustration of why follows at the end of this post.)
  • The ability to scan a variety of storage types – file shares, document management systems, and whatever else you use. For large repositories, you’ll probably want a local agent rather than pure network scanning, for performance reasons. It really depends on the volume of storage and the network bandwidth. Worst case, drop another NIC into the server (whatever is directly connected to the storage) and connect it via a subnet/private network to your scanning tool.
  • Whatever you get, make sure it can examine common file types like Office documents. A text scanner without a file cracker can’t do this.
  • Don’t forget about endpoints – if there’s any chance they touch cardholder data, you’ll probably be told to either scan a sample, or scan them all. An endpoint DLP agent is your best bet – even if you only run it occasionally.
  • Few DLP solutions can scan databases. Either get one that can, or prepare yourself to manually extract to text files any database that might possibly come into scope. And pray your assessor doesn’t want them all checked.
  • Good reporting – to save you time during the assessment process.

DLP offers a lot more, but if all you care about is handling the PCI scope requirement, these are the core pieces and features you’ll need. Another option is to look at a service, which might be something SaaS based, or a consultant with DLP on a laptop. I’m pretty sure there won’t be any shortage of people willing to come in and help you with your PCI problems… for a price.
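As promised above, here is a small illustration of why a bare regular expression isn’t enough for PAN detection, and how a Luhn checksum cuts the false positives. This is only a sketch – real DLP content analysis adds file cracking, issuer prefixes, proximity rules, and much more – and the sample strings use a well-known test number, not real card data.

import re

CANDIDATE_PAN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_ok(digits):
    """Luhn checksum used by payment card numbers."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:      # double every second digit, counting from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_pans(text):
    for match in CANDIDATE_PAN.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if 13 <= len(digits) <= 16 and luhn_ok(digits):
            yield digits

sample = "order 4111111111111111 shipped; invoice 1234567890123456 attached"
print(list(find_pans(sample)))  # only the Luhn-valid test number survives the checksum filter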


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
  • Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.