Securosis

Research

An Analyst Conundrum

Since we’ve jumped on the Totally Transparent Research bandwagon, sometimes we want to write about how we do things over here, and what leads us to make the recommendations we do. Feel free to ignore the rest of this post if you don’t want to hear about the inner turmoil behind our research… One of the problems we often face as analysts is that we find ourselves having to tell people to spend money (and not on us, which for the record, we’re totally cool with). Plenty of my industry friends pick on me for frequently telling people to buy new stuff, including stuff that’s sometimes considered of dubious value. Believe me, we’re not always happy heading down that particular toll road. Not only have Adrian and I worked the streets ourselves, collectively holding titles ranging from lowly PC tech and network admin to CIO, CTO, and VP of Engineering, but as a small business we maintain all our own infrastructure and don’t have any corporate overlords to pick up the tab. Besides that, you wouldn’t believe how incredibly cheap the two of us are. (Unless it involves a new toy.) I’ve been facing this conundrum for my entire career as an analyst. Telling someone to buy something is often the easy answer, but not always the best answer. Plenty of clients have been annoyed over the years by my occasional propensity to vicariously spend their money. On the other hand, it isn’t like all our IT is free, and there really are times you need to pull out the checkbook. And even when free software or services are an option, they might end up costing you more in the long run, and a commercial solution may come with the lowest total cost of ownership. We figure one of the most important parts of our job is helping you figure out where your biggest bang for the buck is, but we don’t take dispensing this kind of recommendation lightly. We typically try to hammer at the problem from all angles and test our conclusions with some friends still in the trenches. 
And keep in mind that no blanket recommendation is best for everyone and all situations- we have to write for the mean, not the deviation. But in some areas, especially web application security, we don’t just find ourselves recommending a tool- we find ourselves recommending a bunch of tools, none of which are cheap. In our Building a Web Application Security series we’ve really been struggling to find the right balance and build a reasonable set of recommendations. Adrian sent me this email as we were working on the last part: “I finished what I wanted to write for part 8. I was going to finish it last night, but I was very uncomfortable with the recommendations, and having trouble justifying one strategy over another. After a few more hours of research today, I have satisfied my questions and am happy with the conclusions. I feel that I can really answer potential questions of why we recommend this strategy as opposed to some other course of action. I have filled out the strategy and recommendations for the three use cases as best I can.” Yes, we ended up having to recommend a series of investments, but before doing that we tried to make damn sure we could justify those recommendations. Don’t forget, they are written for a wide audience and your circumstances are likely different. You can always call us on any bullshit, or better yet, drop us a line to either correct us, or ask us for advice more fitting to your particular situation (don’t worry, we don’t charge for quick advice – yet).

Share:
Read Post

Do You Use DLP? We Should Talk

As an analyst, I’ve been covering DLP since before there was anything called DLP. I like to joke that I’ve talked with more people who have evaluated and deployed DLP than anyone else on the face of the planet. Yes, it’s exactly as exciting as it sounds. But all those references were fairly self-selected. They’ve either been Gartner clients or our current enterprise clients, who were/are typically looking for help in product selection or dealing with some sort of problem. Many of the rest are vendor-supplied references. This combination skews the conversations towards people picking products, people with problems, or those a vendor thinks will make them look good. I’m currently working on an article for Information Security magazine on “Real-World DLP”, and I’m hunting for some new references to expand that field a bit. If you are using DLP, successfully or not, and are willing to talk confidentially, please drop me a line. I’m looking for real-world stories, good and bad. If you are willing to go on the record, we’re also looking for good quote sources. The focus of the article is more on implementation than selection, and it will be vendor-neutral. To be honest, one reason I’m putting this out in the open is to see if my normal reference channels are skewed. It’s time to see how our current positions and assumptions play out on the mean streets of reality. Of course I’ll be totally pissed if I’ve been wrong this entire time and have to retract everything I’ve ever written on DLP. Update – Oh yeah, my email address is rmogull, that is with two ‘L’s, at securosis dot com. Please let me know.

Share:
Read Post

The Business Justification for Data Security: Understanding Potential Loss

Rich posted the full research paper last week, but as not everyone wants to read the full 30 pages, we decided to continue posting excerpts here. We still encourage comments, as this will be a living document for us, and we will expand it in the future. Here is Part Four:

Understanding Potential Losses

Earlier we deliberately decoupled potential losses from risk impact, even though loss is clearly the result of a risk incident. Since this is a business justification model rather than a risk management model, this allows us to account for major types of potential loss that result from multiple types of risk, and simplifies our overall analysis. We will highlight the major loss categories associated with data security and, as with our risk estimates, break them out into quantitative and qualitative categories. These loss categories can be directly correlated back to risk estimates, and it may make sense to walk through that exercise at times, but as we complete our business justification you’ll see why it isn’t normally necessary. If data is stolen in a security breach, will it cost you a million dollars? A single dollar? Will you even notice? Under “Data Loss Models”, we introduced a method for estimating the value of the data your company possesses, to underscore what is at stake. Now we will provide a technique for estimating costs to the business in the event of a loss. We look at some types of loss and their impacts. Some of these have hard costs that can be estimated with a reasonable degree of accuracy. Others are more nebulous, so assigning monetary values doesn’t make sense. But don’t forget that although we may not be able to fully quantify such losses, we cannot afford to ignore them, because unquantifiable costs can be just as damaging.

Quantified vs. Qualified Losses

As we discussed with noisy threats, it is much easier to justify security spending based on quantifiable threats with a clear impact on productivity and efficiency.
With data security, quantification is often the rare exception, and real losses typically combine quantified and qualified elements. For example, a data breach at your company may not be evident until long after the fact. You don’t lose access to the data, and you might not suffer direct losses. But if the incident becomes public, you could then face regulatory and notification costs. Stolen customer lists and pricing sheets, stolen financial plans, and stolen source code can all reduce competitive advantage and impact sales — or not, depending on who stole what. Data stolen from your company may be used to commit fraud, but the fraud itself might be committed elsewhere. Customer information used in identity theft causes your customers major hassles, and if they discover your firm was the source of the information, you may face fines and legal battles over liability. As these can account for a majority of total costs, despite the difficulty in obtaining an estimate of the impact, we must still account for the potential loss to justify spending to prevent or reduce it. We offer two approaches to combining quantified and qualified potential losses. In the first, you walk through each potential loss category and either assign an estimated monetary value, or rate it on our 1-5 scale. This method is faster, but doesn’t help correlate the potential loss with your tolerance. In the second method, you create a chart like the one below, where all potential losses are rated on a 1-5 scale, with either value ranges (for quantitative loss) in the cells, or qualitative statements describing the level of loss. This method takes longer, since you need to identify five measurement points for each loss category, but allows you to more easily compare potential losses against your tolerance, and identify security investments to bring the potential losses (or their likelihood) down to an acceptable level. 
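The first approach above (assign each loss category either a dollar estimate or a 1-5 rating) is simple enough to sketch in code. This is an illustrative sketch only: the category names, dollar figures, and the value bands (which mirror the sample notification-cost chart below) are our own examples, not part of the model.

```python
# A minimal sketch of the first approach: each potential loss category gets
# either an estimated dollar figure or a 1-5 qualitative rating, and the
# dollar figures are normalized onto the shared 1-5 scale using value bands.

def rate_quantified(dollars, bands=(1_000, 10_000, 100_000, 500_000)):
    """Map an estimated dollar loss onto the 1-5 scale using value bands."""
    for rating, ceiling in enumerate(bands, start=1):
        if dollars <= ceiling:
            return rating
    return 5  # anything above the top band

# Mixed scorecard: monetary estimates where we have them, ratings where we don't.
potential_losses = {
    "notification_costs": {"dollars": 250_000},  # quantifiable
    "reputation_damage":  {"rating": 4},         # qualitative only
    "remediation_costs":  {"dollars": 40_000},
}

scorecard = {
    name: entry.get("rating", rate_quantified(entry.get("dollars", 0)))
    for name, entry in potential_losses.items()
}
print(scorecard)
```

Once every category sits on the same 1-5 scale, comparing the combined potential losses against your loss tolerance becomes a straightforward side-by-side check.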
Loss | 1 | 2 | 3 | 4 | 5
Notification costs (total, not per record) | $0-$1,000 | $1,001-$10,000 | $10,001-$100,000 | $100,001-$500,000 | >$500,000
Reputation damage | No negative publicity | Single negative press mention, local/online only | Ongoing negative press <2 weeks, local/online only, or a single major outlet mention | Ongoing sustained negative press >2 weeks, including multiple major outlets; measurable drop in customer activity | Sustained negative press in major outlets or on a national scale; material drop in customer activity

Potential Loss Categories

Here are our recommended assessment categories for potential loss, categorized by quantifiable vs. only qualifiable.

Quantifiable potential data security losses:

- Notification Costs: CA 1386 and associated state mandates require informing customers in the event of a data breach. Notification costs can be estimated in advance, and include contact with customers, as well as any credit monitoring services to identify fraudulent events. The cost is roughly linear with the total number of records compromised.
- Compliance Costs: Most companies are subject to federal regulations or industry codes they must adhere to. Loss of data and data integrity issues are generally violations. HIPAA, GLBA, SOX, and others include data verification requirements and fines for failure to comply.
- Investigation & Remediation Costs: An investigation into how the data was compromised, and the associated costs to remediate the relevant security weaknesses, have a measurable cost to the organization.
- Contracts/SLAs: Service level agreements about quality or timeliness of services are common, as are confidentiality agreements. Businesses that provide data services rely upon the completeness, accuracy, and availability of data; falling short in any one area may violate SLAs and/or subject the company to fines or loss of revenues.
- Credit: Loss of data and compromise of IT systems are both viewed as indications of investment risk by the financial community. The resulting impact on interest rates and availability of funds may affect profitability.
- Future Business & Accreditation: Data loss, compliance failures, or compliance penalties may impair the ability to bid on contracts or even participate in certain ventures, due to loss of accreditation. This can be a permanent or temporary loss, but the effects are tangible. Note that future business is also a qualitative loss — here we

Share:
Read Post

Database Security for DBAs

I think I’ve discovered the perfect weight loss technique- a stomach virus. In 48 hours I managed to lose 2 lbs, which isn’t too shabby. Of course I’m already at something like 10% body fat, so I’m not sure how needed the loss was, but I figure if I just write a book about this and hawk it in some infomercial I can probably retire. My wife, who suffered through 3 months of so-called “morning” sickness, wasn’t all that sympathetic for some strange reason. On that note, it’s time to shift gears and talk about database security. Or, to be more accurate, talk about talking about database security. Tomorrow (Thursday, Feb 5th) I will be giving a webcast on Database Security for Database Professionals. This is the companion piece to the webinar I recently presented on Database Security for Security Professionals. This time I flip the presentation around and focus on what the DBA needs to know, presenting from their point of view. It’s sponsored by Oracle, presented by NetworkWorld, and you can sign up here. I’ll be posting the slides after the webinar, but not for a couple of months, as we reorganize the site a bit to better handle static content. Feel free to email me if you want a PDF copy.

Share:
Read Post

Friday Summary: February 6, 2009

Here it is Friday again, and it feels like just a few minutes ago that I was writing the last Friday summary. This week has been incredibly busy for both of us. Rich has been out for the count most of this week with a stomach virus, wandering his own house like a deranged zombie. This was not really a hack; they were just warning Rich’s neighborhood. As the county cordoned off his house with yellow tape and flagged him as a temporary bio-hazard, I thought it best to forgo this week’s face to face Friday staff meeting, and get back on track with our blogging. Between the business justification white paper that we launched this week, and being on the road for client meetings, we’re way behind. A few items of interest … It appears that data security is really starting to enter the consciousness of the common consumer. Or at least it is being marketed to them. There were even more advertisements in the San Jose airport this week than ever: the ever-present McAfee & Cisco ads were joined by Symantec and Compuware. The supermarket has Identity Theft protection pamphlets from not one but two vendors. The cherry on top of this security sundae was the picture of John Walsh in the in-flight magazine, hawking “CA Internet Security Suite Plus 2009”. I was shocked. Not because CA has a consumer security product, or because they are being marketed along with Harry Potter commemorative wands, holiday meat platters, and low quality electronics. No, it was John Walsh’s smiling face that surprised me. Because John Walsh “Trusts CA Security Products to keep your family safe”. WTF? Part of me is highly offended. The other part of me thinks this is absolutely brilliant marketing. We have moved way beyond a features and technology discussion, and into JV-celebrities telling us what security products we should buy. If it were only that easy.
Myopia Alert: I am sure there are others in the security community who have gone off on this rant as well, but as I did not find a reference anywhere else, I thought this topic was worth posting. A word of caution for you PR types out there who are raising customer and brand awareness through security webinars and online security fora: you might want to have some empathy for your audience. If your event requires a form to be filled out, you are going to lose a big chunk of your audience, because people who care about security care about their privacy as well. The audience member will bail, or your outbound call center will be dialing 50 guys named Jack Mayhoff. Further, if that entry form requires JavaScript and a half dozen cookies, you are going to lose a bigger percentage of your audience, because JavaScript is a feature and a security threat rolled into one. Finally, if the third-party vendor you use to host the event does not support Firefox or Safari, you will lose the majority of your audience. I am not trying to be negative, but want to point out that while Firefox, Safari, and Opera may only constitute 25% of the browser market, they are used by 90% of the people who care about security. Final item I wanted to talk about: our resident wordsmith and all-around good guy Chris Pepper forwarded Rich and me a Slashdot link about how free Monty Python material on YouTube has caused their DVD sales to skyrocket. Both Rich and I have raised similar points here in the past, and we even referred to this phenomenon in the Business Justification for Security Spending paper, discussing why it can be hard to understand damages. While organizations like the RIAA feel this is counter-intuitive, it makes perfect sense to me and anyone else who has ever tried guerrilla marketing, or seen the effects of viral marketing. Does anyone know if the free South Park episodes did the same for South Park DVD sales? I would be interested.
Oh, and Chris also forwarded Le Wrath di Khan, which was both seriously funny and really works as opera (the art form- I didn’t test it in the browser). On to the week in review:

Webcasts, Podcasts, Outside Writing, and Conferences: Rich did a webcast with Roxana Bradescu of Oracle on Information Security for Database Professionals. Here is the sign-up link, and I will post a replay link later when I get one from Oracle.

Favorite Securosis Posts: Rich: Launch of the Business Justification for Security Spending white paper. Whew! Adrian: The Business Justification for Data Security post on Risk Estimation. I knew this was going to cause some of the risk guys to go apoplectic, but we were not building a full-blown risk management model, and frankly, risk calculations made every model so complex no one could use it as a tool.

Favorite Outside Posts: Adrian: Informative post by Robert Graham on shellcode in software development. Write once, run anywhere malware? Anyone? Rich: XKCD was a riot. [What my friend John Kelsey used to call “Lead Pipe Cryptanalysis”]

Top News and Posts: Nine million, in cold hard cash, stolen from ATMs around the world. Wow. I will be blogging more on this in the future: the Symantec and Ask.com joint effort. Marketing hype or real consumer value? Very informative piece on how assumptions about what should be secured and what we can ignore are often the places where we fail: Addicted to insecurity. At least 600k US jobs lost in January. Google thought everyone was serving malware. This is an atrocious practice: the EULA tells you you can’t use your firewall, and they can take all your bandwidth. RBS breach was massive, and fast.

Blog Comment of the Week: From Chris Hayes on the Risk Estimation for Business Justification for Data Security post: Up to this point in the series, this “business justification” security investment model appears to be nothing more then a glorified cost benefit analysis wrapped up in risk/business

Share:
Read Post

The Business Justification for Data Security- Version 1.0

We’ve been teasing you with previews, but rather than handing out more bits and pieces, we are excited to release the complete version of the Business Justification for Data Security. This is version 1.0 of the report, and we expect it to continue to evolve as we get more public feedback. Based on some of that initial feedback, we’d like to emphasize something before you dig in. Keep in mind that this is a business justification tool, designed to help you align potential data security investments with business needs, and to document the justification to make a case with those holding the purse strings. It’s not meant to be a complete risk assessment model, although it does share many traits with risk management tools. We’ve also designed this to be both pragmatic and flexible- you shouldn’t need to spend months with consultants to build your business justification. For some projects, you might complete it in an hour. For others, it may take a few days or weeks as you wrangle business unit heads together to force them to help value different types of information. For those of you who don’t want to read a 38 page paper, we’re going to continue to post the guts of the model as blog posts, and we also plan on blogging additional content, such as more examples and use cases. We’d like to especially thank our exclusive sponsor, McAfee, who also set up a landing page here with some of their own additional whitepapers and content. As usual, we developed the content completely independently, and it’s only thanks to our sponsors that we can release it for free (and still feed our families). This paper is also released in cooperation with the SANS Institute, will be available in the SANS Reading Room, and we will be delivering a SANS webcast on the topic on March 17th. This was one of our toughest projects, and we’re excited to finally get it out there.
Please post your feedback in the comments, and we will be crediting reviewers who advance the model when we release the next version. And once again, thanks to McAfee, SANS, and (as usual) Chris Pepper, our fearless editor.

Share:
Read Post

The Business Justification for Data Security: Risk Estimation

This is the third part of our Business Justification for Data Security series (Part 1, Part 2), and if you have been following the series this far, you know that Rich and I have complained about how difficult this paper was to write. Our biggest problem was fitting risk into the model. In fact, we experimented with and ultimately rejected a couple of models, because the reduction in risk from any given security investment was non-linear. And there were many threats and many different responses, few of which were quantifiable, making the whole effort ‘guesstimate’ soup. In the end, risk became our ‘witching rod’: a guide to how we balance value vs. loss, but just one of the tools we use to examine investment decisions.

Measuring and understanding the risks to information

If data security were a profit center, we could shift our business justification discussion from the value of information right into assessing its potential for profit. But since that isn’t the case, we are forced to examine potential reductions in value as a guide to whether action is warranted. The approach we need to take is to understand the risks that directly threaten the value of data, and the security safeguards that counter those risks. There’s no question our data is at risk: from malicious attackers and nefarious insiders to random accidents and user errors, we read about breaches and loss nearly every day. But while we have an intuitive sense that data security is a major issue, we have trouble getting a handle on the real risks to data in a quantitative sense. The number of possible threats and ways to steal information is staggering, but when it comes to quantifying risks, we lack much of the information needed for an accurate understanding of how these risks impact us.
Combining quantitative and qualitative risk estimates

We’ll take a different approach to looking at risk: we will focus on quantifying the things that we can, qualifying the things we can’t, and combining them in a consistent framework. While we can measure some risks, such as the odds of losing a laptop, it’s nearly impossible to measure other risks, such as a database breach via a web application due to a new vulnerability. If we limit ourselves only to what we can precisely measure, we won’t be able to account for many real risks to our information. Including quantitative assessments where possible, since they are a powerful tool to understand risk and influence decisions, helps validate the overall model. For our business justification model, we deliberately simplify the risk assessment process to give us just what we need to understand the need for data security investments. We start by listing the pertinent risk categories, then the likelihood or annual rate of occurrence for each risk, followed by severity ratings broken out for confidentiality, integrity, and availability. For risk events we can predict with reasonable accuracy, such as lost laptops with sensitive information, we can use numbers. In the example below, we know the Annualized Rate of Occurrence (ARO), so we plug this value in. For less predictable risks, we just rate them from “low” to “high”. We then mark off our currently estimated (or measured) levels in each category. For qualitative measures we use a 1-5 scale, but this is arbitrary, and you should use whatever scale provides a level of granularity that assists understanding.

Risk Estimation: Credit Card Data (Sample):

Risk | Likelihood/ARO | C | I | A | Total
Lost Laptop | 43 | 4 | 1 | 3 | 51
Database Breach (Ext) | 2 | 5 | 3 | 2 | 12

(C, I, and A are the impact severities for confidentiality, integrity, and availability.) This is the simplified risk scorecard for the business justification model.
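The scorecard arithmetic above is simple enough to sketch in a few lines of code. This is an illustrative sketch only (the function and field names are ours, not part of the model); it mirrors the sample rows, where likelihood is either a measured ARO or a 1-5 rating:

```python
# A sketch of the simplified risk scorecard: each row holds a likelihood
# (measured ARO, or a qualitative 1-5 rating) plus severity ratings for
# confidentiality, integrity, and availability, summed into a row total.

def scorecard_row(likelihood, c, i, a):
    """Build a scorecard row and total its likelihood and C/I/A severities."""
    return {"likelihood": likelihood, "C": c, "I": i, "A": a,
            "total": likelihood + c + i + a}

risks = {
    "Lost Laptop":           scorecard_row(43, 4, 1, 3),  # ARO measured directly
    "Database Breach (Ext)": scorecard_row(2, 5, 3, 2),   # qualitative 1-5 rating
}

for name, row in risks.items():
    print(f"{name}: total {row['total']}")
```

As noted above, the totals are not meant to rank one risk category against another; they are baselines you re-compute after modeling a proposed investment, to show the estimated reduction.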
The totals aren’t meant to compare one risk category to another, but to derive estimated totals we will use in our business justification to show potential reductions from the evaluated investment. While different organizations face different risk categories, we’ve included the most common data security risks here, and in Section 6 we show how this integrates into the overall model.

Common data security risks

The following is an outline of the major categories of information loss. Any time you read about a data breach, one or more of these events occurred. This list isn’t intended to be comprehensive, but rather to provide a good overview of common data security risk categories to give you a jump start on implementing the model. Rather than discuss each and every threat vector, we will present logical groups to illustrate that the risks and potential solutions tend to be very similar within each specific category. The following are the principal categories to consider:

Lost Media

This category describes data at rest, residing on some form of media, that has been lost or stolen. Media includes disk drives, tape, USB/memory sticks, laptops, and other devices. This category encompasses the majority of cases of data loss. Typical security measures for this class include media encryption, media “sanitizing”, and in some cases endpoint Data Loss Prevention technology.

- Lost disks/backup tapes
- Lost/stolen laptops
- Information leaked through decommissioned servers/drives
- Lost memory sticks/flash drives
- Stolen servers/workstations

Inadvertent Disclosure

This category includes data being accidentally exposed in some way that leads to unwanted disclosure. Examples include email to unwanted recipients, posting confidential data to web sites, unsecured Internet transmissions, lack of access controls, and the like. Safeguards include email & web security platforms, DLP, and access control systems. Each is effective, but only against certain threat types. Process and workflow controls are also needed to help catch human error.

- Data accidentally leaked through email (sniffed, wrong address, un-purged document metadata)
- Data leaked by inadvertent exposure (posted to the web, open file shares, unprotected FTP, or otherwise placed in an insecure location)
- Data leaked over an unsecured connection
- Data leaked through file sharing (file sharing programs are used to move large files efficiently, and possibly illegally)

External Attack/Breach

This category describes instances of data theft where company systems and applications are compromised by a malicious attacker, affecting confidentiality

Share:
Read Post

Friday Summary – Jan 30, 2009

A couple of people forwarded me this interview, and if you have not read it, it is really worth your time. It’s an amazing interview with Matt Knox, a developer with Direct Revenue who authored adware during his employ with them. For me this is important, as it highlights stuff I figured was going on but really could not prove. It also exposes much of the thought process behind the developers at Microsoft, and it completely altered my behavior for ‘sanitizing’ my PCs. For me, this all started a few years ago (2005?) when my Windows laptop was infected with this stuff. I discovered something was going on because there was ongoing activity in the background when the machine was idle, which started to affect machine responsiveness. The mysterious performance degradation was difficult to track down, as I could not locate a specific application responsible, and the process monitors provided with Windows are wholly inadequate. I found that there were processes running in the background unassociated with any application, and unassociated with Windows. I did find files that were associated with these processes, and it was clear they did not belong on the machine. When I went to delete them, they popped up again within minutes- with new names! I was able to find multiple registry entries, and the behavior suggested that multiple pieces of code monitored each other for health and availability, and fixed each other if one was deleted. Even if I booted in safe mode, I had no confidence that I could completely remove this … whatever it was … from the machine. At that point in time I knew I needed to start over. How this type of software could have gotten into the registry and installed itself in such a manner was perplexing to me. Being a former OS developer, I started digging, and that’s when I got mad. Mr. Knox uses the word ‘promiscuous’ to describe the OS calls, and that was exactly what it was.
There were API calls to do pretty much anything you wanted to do, all without so much as a question being asked of the user or of the installing party. You get a clear picture of the mentality of the developers who wrote the IE and Windows OS code back then- there were all sorts of nifty ways to ‘do stuff’, for anyone who wanted to, and not a shred of a hint of security. All of these ‘features’ were for someone else’s benefit! They could use my resources at will- as if they had the keys to my house, and when I left, they were throwing a giant party at my expense. What really fried me was that, while I could see these processes and registry entries, none of the anti-virus or anti-malware tools would detect them. So if I wanted to secure my machine, it was up to me to do it. So I said this changed my behavior. Here’s how:

- Formatted the disk and reinstalled the OS.
- Switched to Firefox full time. A few months later I discovered Flashblock and NoScript.
- Stopped paying for desktop anti-virus and used free stuff or nothing at all. It didn’t work for the desktop, and email AV addressed my real concern.
- Found a process monitor that gave me detailed information on what was running and what resources were being used.
- Cataloged every process on the Windows machine, and kept a file describing each process’s function, so I could cross-check and remove anything that was not supposed to be there.
- Began manually starting everything non-core through the services panel only when I needed it. Not only did this help me detect stuff that should not be running, it reduced the risks associated with poorly secured applications that leave services sitting wide open on a port.
- Uninstalled WebEx, RealPlayer, and several other suspects after each use.
- Kept all of my original software nearby and planned to re-install fresh, from CD or DVD, every year. Until I got VMware.
- Used a virtual partition for risky browsing whenever possible.
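The cross-check step above (cataloging known processes and flagging anything new) can be sketched in a few lines. Everything here is illustrative: the process names and catalog contents are made up, and on a real machine you would feed in the live process list from your process monitor rather than a hard-coded list.

```python
# Sketch of the process-cataloging cross-check: compare currently running
# processes against a hand-maintained catalog of known-good names, and
# surface anything that isn't in the catalog.

def find_unknown_processes(running, catalog):
    """Return processes that are running but absent from the known-good catalog."""
    known = {name.strip().lower() for name in catalog}
    return sorted(p for p in running if p.strip().lower() not in known)

# Illustrative stand-ins for the catalog file and a live process list.
catalog = ["explorer.exe", "svchost.exe", "winlogon.exe"]
running = ["explorer.exe", "svchost.exe", "aurora.exe", "winlogon.exe"]

suspects = find_unknown_processes(running, catalog)
print(suspects)  # anything listed here deserves a closer look
```

The comparison is case-insensitive because Windows process names vary in capitalization between tools; the catalog is the part you maintain by hand as software is installed or removed.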
I now use a Mac, and run my old licensed copies of Windows in Parallels. Surprised? Here is the week’s security summary:

Webcasts, Podcasts, Outside Writing, and Conferences: Martin, Rich, and I talk about the White House homeland security agenda, phishing, and the monster.com security breach on the Network Security Podcast #136. Don’t forget to submit any hacks or exploits for Black Hat 2009 consideration.

Favorite Securosis Posts: Rich: the Inherent Role Conflicts in National Cyber-security post. Adrian: the post on Policies and Security Products: something you need to consider in any security product investment.

Favorite Outside Posts: Adrian: Rafal’s post on network security: not ready to give up, but surely need to switch the focus. Rich: Like Adrian said, the philosecurity interview with Matt Knox is a really interesting piece.

Top News and Posts: Very interesting piece from Hackademics on IE’s “clickjacking protection”. Additional worries about upcoming Conficker worm payloads. Can’t be all security: this is simply astounding: Exxon achieves $45 billion in 2008. Not in revenue, in profit. The disk drive companies are marketing built-in encryption. While I get a little bristly when it’s marketed as protecting the consumer even though it’s going into server arrays, this is a very good idea, and will eventually end up in consumer drives. Yeah! More on DarkMarket and the undercover side of the operation. Police still after the culprits in the Heartland breach. Again? Monster.com has another breach. They have a long way to go before they catch LexisNexis, but they’re trying. The Red Herring site has been down all week … wondering if they have succumbed to market conditions.

Blog Comment of the Week: Good comment from Jack Pepper on the “PCI isn’t meant to protect cardholder …” post: “Why is this surprising? the PCI standard was developed by the card industry to be a “bare minimum” standard for card processing. If anyone in the biz thinks PCI is more that “the bare

Share:
Read Post

The Most Powerful Evidence That PCI Isn’t Meant To Protect Cardholders, Merchants, Or Banks

I just read a great article on the Heartland breach, which I'll talk more about later. There is one quote in there that really stands out: End-to-end encryption is far from a new approach. But the flaw in today's payment networks is that the card brands insist on dealing with card data in an unencrypted state, forcing transmission to be done over secure connections rather than the lower-cost Internet. This approach avoids forcing the card brands to decrypt the data when it arrives. While I no longer think PCI is useless, I still stand by the assertion that its goal is to reduce the risks of the card companies first, and only peripherally to reduce the real risk of fraud. Thus cardholders, merchants, and banks carry both the bulk of the costs and the risks. And here's more evidence of its fundamental flaws. Let's fix the system instead of just gluing on more layers that are more costly in the end. Heck, let's bring back SET!
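The end-to-end model the quote contrasts with today's networks is simple to illustrate: the merchant encrypts card data with a key shared only with the brand's endpoint, and the processor in the middle only ever handles ciphertext. A toy sketch using a one-time-pad XOR, purely to show the data flow; a real deployment would use a vetted cipher and proper key management, and every name here is hypothetical:

```python
import secrets

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    """Toy one-time pad: XOR plaintext with an equal-length random key.
    Illustrates data flow only -- not a production cipher."""
    assert len(key) == len(plaintext)
    return bytes(k ^ p for k, p in zip(key, plaintext))

decrypt = encrypt  # XOR is its own inverse

# Merchant side: encrypt the PAN under a key shared only with the card brand.
pan = b"4111111111111111"  # hypothetical test card number
key = secrets.token_bytes(len(pan))

ciphertext = encrypt(key, pan)

# Processor in the middle: forwards the blob, never holds the key,
# so it never sees the cleartext card number.
forwarded = ciphertext

# Card brand endpoint: the only party holding the key recovers the PAN.
assert decrypt(key, forwarded) == pan
```

The point of the sketch is the middle step: with encryption truly end to end, a sniffer planted at the processor, as in the Heartland breach, captures only ciphertext.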

Share:
Read Post

Heartland Payment Systems Attempts To Hide Largest Data Breach In History Behind Inauguration

Brian Krebs of the Washington Post dropped me a line this morning on a new article he posted. Heartland Payment Systems, a credit card processor, announced today, January 20th, that up to 100 million credit cards may have been disclosed in what is likely the largest data breach in history. From Brian's article: Baldwin said 40 percent of transactions the company processes are from small to mid-sized restaurants across the country. He declined to name any well-known establishments or retail clients that may have been affected by the breach. Heartland called the U.S. Secret Service and hired two breach forensics teams to investigate. But Baldwin said it wasn't until last week that investigators uncovered the source of the breach: a piece of malicious software planted on the company's payment processing network that recorded payment card data as it was being sent for processing to Heartland by thousands of the company's retail clients. … "The transactional data crossing our platform, in terms of magnitude… is about 100 million transactions a month," Baldwin said. "At this point, though, we don't know the magnitude of what was grabbed." I want you to roll that number around on your tongue a little bit: 100 million transactions per month. I suppose I'd try to hide behind one of the most historic events in the last 50 years if I were in their shoes. "Due to legal reviews, discussions with some of the players involved, we couldn't get it together and signed off on until today," Baldwin said. "We considered holding back another day, but felt in the interests of transparency we wanted to get this information out to cardholders as soon as possible, recognizing of course that this is not an ideal day from the perspective of visibility." In a short IM conversation Brian mentioned he called the Secret Service today for a comment, and was informed they were a little busy.
We'll talk more once we know more details, but this is becoming a more common attack vector, and by our estimates it is the most common vector in massive breaches. TJX, Hannaford, and CardSystems, three of the largest previous breaches, all involved installing malicious software on internal networks to sniff cardholder data and export it. This was also another case discovered not through the company's own internal security controls, but by detecting fraud in the system and tracing it back to its origin.

Share:
Read Post

Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
  • Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context or contrast, or to make a point (which is very, very rare).
  • Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, at the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited to a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.