RSAC 2010 Guide: Data Security

Over the next 3 days, we’ll be posting the content from the Securosis Guide to the RSA Conference 2010. We broke the market into 8 different topics: Network Security, Data Security, Application Security, Endpoint Security, Content (Web & Email) Security, Cloud and Virtualization Security, Security Management, and Compliance. For each section, we provide a little history and what we expect to see at the show. Next up is Data Security.

Data Security

Although technically nearly all of Information Security is directed at protecting corporate data and content, in practice our industry has historically focused on network and endpoint security. At Securosis we divide the data security world into two major domains, based on how users access data: the data center and the desktop. This reflects how data is managed far more practically than “structured” and “unstructured”. The data center includes access through enterprise applications, databases, and document management systems. The desktop includes productivity applications (the Office suite), email, and other desktop applications and communications.

What We Expect to See

There are four areas of interest at the show relative to data security:

Content Analysis: This is the ability of security tools to dig inside files and packets to understand the content inside, not just the headers or other metadata. The most basic versions are built on pattern matching (regular expressions), while advanced options include partial document matching and database fingerprinting. Content analysis techniques were pioneered by Data Loss Prevention (DLP) tools, and are starting to pop up in everything from firewalls, to portable device control agents, to SIEM systems. The most important questions to ask are the ones that identify the kind of content analysis being performed. Regular expressions alone can work, but produce more false positives and negatives than other options (the short sketch below illustrates why). Also find out whether the feature can peer inside different file types, or only analyze plain text. Depending on your requirements you may not need advanced techniques, but you do need to understand exactly what you’re getting, and determine whether it will really help you protect your data, or just generate thousands of alerts every time someone buys a collectable shot glass from Amazon.

DLP Everywhere: Here at Securosis we use a narrow definition for DLP that includes solutions designed to protect data with advanced content analysis capabilities and dedicated workflow, but not every vendor marketing department agrees with our approach. Given the customer interest around DLP, we expect you’ll see a wide variety of security tools with DLP or “data protection” features, most of which are either basic content analysis or some form of context-based file or access blocking. These DLP features can be useful, especially in smaller organizations and those with limited data protection needs, but they are a pale substitute if you need a dedicated data protection solution. When talking with these vendors, start by digging into their content analysis capabilities and how they really work from a technical standpoint. If you get a technobabble response, just move on. Also ask to see a demo of the management interface – if you expect a lot of data-related violations, you will likely need a dedicated workflow to manage incidents, so user experience is key. Finally, ask them about directory integration – when it comes to data security, different rules apply to different users and groups.
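To make the content analysis discussion concrete, here is a minimal sketch (ours, purely illustrative – not any vendor’s engine) of regex-only matching versus matching with a validation pass layered on top. The pattern and function names are our own inventions; real products add techniques like proximity analysis and file cracking well beyond this.

    import re

    # Naive pattern: any 13-16 digit run with optional space/dash separators.
    # This is roughly what regex-only "content analysis" amounts to, and why
    # it flags order numbers, tracking numbers, and receipts alike.
    CARD_PATTERN = re.compile(r'\b(?:\d[ -]?){13,16}\b')

    def luhn_valid(digits: str) -> bool:
        """Luhn checksum: a cheap second pass that rejects most random digit runs."""
        nums = [int(d) for d in reversed(digits)]
        total = sum(nums[0::2]) + sum(sum(divmod(2 * d, 10)) for d in nums[1::2])
        return total % 10 == 0

    def find_candidate_pans(text: str) -> list:
        """Return digit runs that look like card numbers AND pass the checksum."""
        hits = []
        for match in CARD_PATTERN.finditer(text):
            digits = re.sub(r'[ -]', '', match.group())
            if luhn_valid(digits):
                hits.append(digits)
        return hits

    # An order number matches the regex but fails the checksum,
    # while a real (test) card number passes both:
    print(find_candidate_pans("Order 1234-5678-9012-3456"))  # []
    print(find_candidate_pans("Card 4111 1111 1111 1111"))   # ['4111111111111111']

A regex-only engine alerts on both strings above; the checksum pass quietly drops the first. That is the practical difference between “regular expressions alone” and even slightly smarter analysis.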
Encryption and Tokenization: Thanks to a combination of PCI requirements and recent data breaches, we are seeing a ton of interest in application and database encryption and tokenization. Tokenization replaces credit card numbers or other sensitive strings with random token values (which may match the credit card format), matched to the real numbers only in a central, highly secure database (a sketch of the idea follows this section). Format Preserving Encryption instead encrypts the numbers so you can recover them in place, but the encrypted values retain the credit card number format. Finally, newer application and database encryption options focus on improved ease of use and deployment compared to their predecessors. You don’t really need to worry about encryption algorithms, but it is important to understand platform support, management user experience (play around with the user interface), and deployment requirements. No matter what anyone tells you, there are always requirements for application and database changes, but some of these approaches can minimize the pain. Ask how long an average deployment takes for an organization of your size, and make sure they can provide real examples or references in your industry, since data security is very industry specific.

Database Security: Due partially to acquisitions and partially to customer demand, we are seeing a variety of tools add database security features. Latest in the hit parade are SIEM tools capable of monitoring database transactions, and vulnerability assessment tools with database support. These parallel the dedicated Database Activity Monitoring and Database Assessment markets. As with any area of overlap and consolidation, you’ll need to figure out whether you need a dedicated tool, or whether features in another type of product are good enough. We also expect to see a lot more talk about data masking – the conversion of production data into a pseudo-random but still usable format for development.
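Returning to tokenization: here is a minimal sketch of the idea, under the simplifying (and hypothetical) assumption of an in-memory vault. In a real deployment the token map lives in that central, hardened, access-controlled database, and collision handling, auditing, and access control are the hard parts.

    import secrets

    class TokenVault:
        """Toy tokenization vault. Real card numbers live only here; every
        downstream system stores a random, format-matching token instead."""

        def __init__(self):
            self._token_to_pan = {}

        def tokenize(self, pan: str) -> str:
            # Random digits of the same length, keeping the last four for
            # display and reconciliation, which is a common convention.
            token = ''.join(secrets.choice('0123456789') for _ in pan[:-4]) + pan[-4:]
            self._token_to_pan[token] = pan
            return token

        def detokenize(self, token: str) -> str:
            # Only systems allowed to query the vault ever see the real number.
            return self._token_to_pan[token]

    vault = TokenVault()
    token = vault.tokenize("4111111111111111")
    print(token)                                          # e.g. 5029381746521111
    print(vault.detokenize(token) == "4111111111111111")  # True

The contrast with Format Preserving Encryption: an FPE value is derived mathematically from the card number and a key, so anyone holding the key can decrypt it in place; a token is pure random noise, recoverable only by asking the vault.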


RSVP for the Securosis and Threatpost Disaster Recovery Breakfast

We quite enjoy all the free evening booze at the RSA conference, but most days what we’d really like is just a nice, quiet breakfast. Seriously, what’s with throwing massive parties for people to network, then blasting the music so loud that all we can do is stand around and stare at the mostly-all-dude crowd?

In response, last year we started the Disaster Recovery Breakfast, and it went over pretty well. It’s a nice quiet breakfast with plenty of food, coffee, recovery items (aspirin & Tums), and even the hair of the dog for those of you not quite ready to sober up. No marketing, no presentations, no sales types trolling for your card. Sit where you want, drop in and out as much as you want, and if you’re really a traditionalist, blast your iPod and stand in a corner staring at us while nursing a Bloody Mary.

This year we will be holding it Thursday morning at Jillian’s in the Metreon, from 8 to 11. The door is open during that window, so feel free to stop by at any time and stay as long as you want. We’re even cool if you drive through just to mooch some quick coffee.

Please RSVP by dropping us a line at rsvp@securosis.com, and we’ll see you there!



FireStarter: IT-GRC: The Paris Hilton of Unicorns

Like any analyst, I spend a lot of time on vendor briefings and meetings with very early-stage startups. Sometimes it’s an established vendor pushing a new product or widget, and other times it’s a stealth idea I’m evaluating for one of our investor clients. Usually I can tell within a few minutes whether the idea has a chance, assuming the person on the other side is capable of articulating what they actually do (an all too common problem). In 2007 I posted on the primary technique I use to predict security markets, and as we approach RSA I’m going to build on that framework with one of my favorite examples: IT-GRC.

IT-GRC (governance, risk, and compliance) products promise a wonderland of compliance bliss. Just buy this very expensive product – which typically requires major professional services to implement, and all your business units to buy in and participate – and all your risk and compliance problems will go away. Your CEO and CIO get a kick-ass dashboard that lets them assess all your risk and compliance issues across IT, and you can generate all the reports your auditor could ever ask for at the press of a button. Uh-huh. Right. Because that always works so well, just like ERP.

Going back to my framework for predicting security markets, there are three classes of markets:

  • Threat/Response – Things that keep your customer website from being taken down, ensure people can surf during lunch, and keep the CEO from asking what’s wrong with his or her email. All those other threats? They don’t matter.
  • Compliance – Something mandated by your auditor or assessor, with financial penalties if you don’t comply. And those penalties have to cost more than the solution.
  • Internal Motivation/Efficiency – Things that help you do your job better and improve efficiency, with corresponding cost savings.

The vast majority of security spending is in response to noisy, in-your-face threats that disrupt your business (someone stealing your data doesn’t count, unless they burn the barn behind them). The rest deals with compliance mandates and deficiencies. I think we spend only single-digit percentages of our security budgets on anything else, maybe.

So let’s look at IT-GRC. It doesn’t directly stop any threats, and it’s never mandated for compliance. It’s a reporting and organization tool – and a particularly expensive one. Thus we only see it succeeding in the largest of large companies, where it shows a financial return by reducing the massive manual costs of reporting. Mid-sized and small companies simply aren’t complex enough to see the same level of benefits, and the cost of implementation alone (never mind the typically 6-figure product costs) isn’t justified by the benefits.

IT-GRC in most organizations is like chasing Paris Hilton the Unicorn: expensive and high-maintenance, with mythical benefits – and unless you have some serious bank, it isn’t worth the chase. That’s not my assessment – it’s a statement of the realities of the market. I don’t even have to declare GRC dead (not that I’m against that). If you have any contacts in one of these companies – someone who will tell you the honest truth – you know these products don’t make sense for mid-sized and small companies.

This post isn’t an assessment of value – it’s a statement of execution. In other words, this isn’t my opinion – the numbers speak for themselves. All you end users reading this already know what I’m saying, since none of you are buying these products anyway.


Friday Summary: February 19, 2010

I’d like some fail, with a little fail, and a side of fail. Rothman was out in Phoenix this week for some internal meetings, and to record some video segments that we will be putting out fairly soon. I have a slightly weird video recording and production setup, designed to make it super fast and dirt easy for us to put segments together. I’ve tested most of it before, although I did add a new time saver right before Mike showed up. Yeah, you know where this is headed.

First, the new thing didn’t work. It was so frustrating that we almost ran out and bought a new camera so we wouldn’t need the extra box. Actually, we did run out, but it turns out almost no consumer high-definition cameras have FireWire anymore. I dropped back into troubleshooting and debugging mode once I realized we were stuck. My personal process is to first eliminate as many variables as possible, then slowly add one function or component at a time until I can identify where the failure is. Rip it back to the frame, then build and test piece by piece. That didn’t work.

So I moved on to option 2, which has helped me more in my IT career than I care to admit (in my tech days I was the one they pulled in when no one else could get something to work). It’s no big secret – I just screw with it until the problem goes away. I try all sorts of illogical stuff that shouldn’t work, and usually does. I call this “sacrificing a chicken” mode. I toss out all assumptions about how a computer system should work, and just start mashing keys in some barely-logical way. I figure there are so many layers of abstraction and so many interconnections in modern software that it is nearly impossible to completely model and predict how things will really work. It totally worked.

With that up and running, the next bit failed. The software we use to live mix the video couldn’t handle our feeds, even though our setup is well within the performance expectations and recommendations. We use BoinxTV, but it was effectively useless on a tricked-out MacBook Pro. That one I couldn’t fix. No prob – I had a backup plan: record the video, then edit/mix on my honking Mac Pro with 12GB of RAM and 8 cores. You really know where this is headed. Despite the fact that I’ve done this before with test footage, using the exact same process, it didn’t work. Something about the latest version of Boinx. So I restored the old version using Time Machine, and it still wouldn’t work. Oh, and then there’s the part where my Mac suddenly informed me it was missing memory (fixed with a re-seating, but still annoying). I’ve sent 2 tech support requests in, but no responses yet. Had this happened pre-Macworld Expo, I could have cornered them on the show floor. Ugh.

My wife came up with one last option that I haven’t tried yet. Our best guess is that something in one of Apple’s Mac OS X updates caused the problem. She suggested I restore Leopard onto her MacBook and test on that. Better yet – I have spare drives in the Mac Pro for testing new versions of operating systems, and there’s no reason I can’t install the old version. I’m also going to upgrade my video card. I don’t expect any of this to work, but I really need to produce these videos, and am not looking forward to the more time-consuming traditional process.

But for those of you who troubleshoot, my methodology almost always works: back out to nothing and build/test, build/test – or randomly screw with stuff that shouldn’t help, but usually does.
On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Adrian’s Dark Reading posts on The Cost of Database Security, and Oracle 0-day fun.
  • Rich’s endpoint DLP deployment tips at TechTarget.

Favorite Securosis Posts

  • David Mortman: Network Security Fundamentals: Looking for Not Normal.
  • Mike Rothman: Adrian’s paper on DB Assessment. Great paper – really digs into the why and how.
  • Adrian Lane: The VA White Paper, of course!
  • Rich: It was a slow week on the blog, with all of us distracted by my video failures, but here’s a nugget from when this was my personal blog, not a business: Security is like dentistry.

Other Securosis Posts

  • Incite 2/17/2010 – Open Your Mind

Favorite Outside Posts

  • Adrian Lane: The List of Top 25 Most Dangerous Programming Errors. When I first read the post I was thinking it could be re-titled “Why Web Programmers Suck”, but when you get past the first half dozen or so poor coding practices, it could be pretty much any application. And let’s face it, web apps are freaking hard because you cannot trust the user or the user environment. Regardless, print this out and post it on the break room wall for the rest of the development team to read every time they get a cup of coffee.
  • Pepper: Urine Sample Hacked?
  • Mike Rothman: No one knows what the F*** they are doing. Awesome post to understand and remind you that you don’t have all the answers. But you had better know what you don’t know.
  • Rich: Rafal reminds people to know who you are giving your data to. He can be a bit reactionary at times, but he nails it with this one. How do you think Facebook and Google make their money? They aren’t evil, but they are what they are.

Project Quant Posts

  • Project Quant: Database Security – Masking

Research Reports and Presentations

  • Report: Database Assessment
  • Database Audit Events

Top News and Posts

  • Got Bluescreen? Check for Rootkits.
  • A very good composite of the Google Attacks.
  • SQL Azure Update 1 available.
  • Adobe issues emergency patch.
  • Security bug in Google Buzz.
  • Chinese hackers at work in India cracking government systems.

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to Erin (Secbarbie), in response to What is Your Plan B?.


Choose Your Own Whitepaper Adventure (and Upcoming Papers)

We are in the process of finalizing our research plans for the next few months, so I want to see if there are any requests for research out there. First, here are some papers we anticipate completing over the next 3 months:

  • Understanding and Selecting a Database Encryption or Tokenization Solution
  • Understanding and Selecting a Database Assessment Solution
  • Project Quant for Database Security
  • Quick Wins with DLP
  • Pragmatic Data Security
  • Network Security Fundamentals
  • Endpoint Security Fundamentals
  • Understanding and Selecting a SIEM/Log Management Product
  • Understanding and Implementing Network Segregation
  • Data Security for the Cloud

Some of these are sponsored, some aren’t, and all will be released for free under a Creative Commons license. But we’d also like to know if there are any areas you’d like to see us develop. What the heck – since we give it away for free, you might as well take advantage of us. The one area we aren’t ready to cover yet is identity management, but anything else is open. Seriously – use us. We like it. Oh, yeah.


Counterpoint: Admin Rights Don’t Matter the Way You Think They Do

Update – Based on feedback: I failed to distinguish that I’m referring to normal users running as admin. Sysadmins and domain admins definitely shouldn’t be running with their admin privileges except when they need them. As you can read in the comments, that’s a huge risk.

When I was reviewing Mike’s FireStarter on yanking admin rights from users, it got me thinking about whether admin rights really matter at all. Yes, I realize this is a staple of security dogma, but I think the value of removing admin rights is completely overblown, for two reasons:

  • There are plenty of bad things an attacker can do in userland without admin rights. You can still install malware and access everything the user can (the sketch at the end of this post makes this concrete).
  • Lack of admin privileges is little more than a speed bump (if even that) for many kinds of memory corruption attacks. Certain buffer overflows and other attacks that directly manipulate memory can get around rights restrictions and run as root, admin, or worse. For example, if you exploit a kernel flaw with a buffer overflow (including flaws in device drivers), you are running in Ring 0 and fully trusted, no matter what privilege level the user was running at.

If you read through the vulnerability updates on various platforms (Mac, PC, whatever), there are always a bunch of attacks that still work without admin rights. I’m also completely ignoring privilege escalation attacks, but we all know they tend to get patched more slowly than remote exploitation vulnerabilities.

This isn’t to say that removing admin rights is completely useless – it’s very useful for keeping users from mucking up your desktop images – but from a defensive standpoint, restricting user rights is not nearly as effective as often claimed.

My advice? Do not rely on standard user mode as a security defense. It’s useful for locking down users, but has only limited effectiveness for stopping attacks. When you evaluate pulling admin rights, don’t assume it will suddenly eliminate the need for other standard endpoint security controls.
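Here is a minimal sketch of that first point, with hypothetical paths and file names; everything below runs as a standard, non-admin user:

    from pathlib import Path

    home = Path.home()

    # 1. Read anything the user can read: documents, browser profiles, SSH keys.
    docs = home / "Documents"
    if docs.exists():
        readable = [p for p in docs.rglob("*") if p.suffix in {".docx", ".xlsx", ".pdf"}]
        print(f"{len(readable)} user documents readable without admin rights")

    # 2. Persist across logins through a per-user autostart location, which is
    #    user-writable by design. Illustrative Linux desktop path; Windows has
    #    HKCU\Software\Microsoft\Windows\CurrentVersion\Run, and macOS has
    #    ~/Library/LaunchAgents, the same idea on each platform.
    autostart = home / ".config" / "autostart"
    autostart.mkdir(parents=True, exist_ok=True)
    (autostart / "not-malware-honest.desktop").write_text(
        "[Desktop Entry]\nType=Application\nExec=/bin/true\n"
    )
    print("wrote a per-user autostart entry, no admin prompt involved")

Nothing in this sketch requires elevation, which is exactly the point: the data an attacker usually wants belongs to the user, not to the administrator.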


The NSA Isn’t Evil (Even Working with Google)

The NSA is going to work with Google to help analyze the recent Chinese (probably) hack. Richard Bejtlich predicted this, and I consider it a very positive development. It’s a recognition that our IT infrastructure is a critical national asset, and that the government can play a role in helping respond to incidents and improve security. That’s how it should be – we don’t expect private businesses to defend themselves from amphibious landings (at least on our territory), and the government has political, technical, and legal resources simply not available to the private sector.

Despite some of the more creative TV and film portrayals, the NSA isn’t out to implant microchips in your neck and follow you with black helicopters. They are a signals intelligence collection agency, and we pay them to spy on as much international communication as possible to further our national interests. Think that’s evil? Go join Starfleet – it’s the world we live in. Even though there was some abuse during the Bush years, most of that was either ordered by the President or non-malicious (yes, I’m sure there was some real abuse, but I bet it was pretty uncommon). I’ve met NSA staff and worked with plenty of three-letter agency types over the years, and they’re just ordinary folks like the rest of us. I hope we’ll see more of this kind of cooperation.

Now, the one concern is for you foreigners – the role of the NSA is to spy on you, and Google will have to be careful to avoid potentially uncomfortable questions from foreign businesses and governments. But I suspect they’ll be able to manage the scope and keep things under control. The NSA probably pwned them years ago anyway.

Good stuff, and I hope we see more direct government involvement… although we really need a separate agency to handle this, due to the conflicting missions of the NSA.

Note: for those of you who follow these things, there is clear political maneuvering by the NSA here. They want to own cybersecurity, even though it conflicts with their intel mission. I’d prefer to see another agency hold the defensive reins, but until then I’m happy for any .gov cooperation.


Friday Summary: February 5, 2010

I think I need to stop feeling guilty for trying to run a business. Yesterday we announced that we’re trying to put together a list of end users we can run the occasional short survey past. I actually felt guilty that we will derive some business benefit from it, even though we give away a ton of research and advice for free, and the goal of the surveys isn’t to support marketing, but primary research. I’ve been doing this job too long when I don’t even trust myself anymore, and rip apart my own posts to figure out what the angle is. Jeez – it isn’t like I felt guilty about getting paid to work on an ambulance.

It is weird to try to build a business where you maintain objectivity while trying to give away as much as possible for free. I think we’re doing a good job of managing vendor sponsorship, thanks to our Totally Transparent Research process, which allows us to release most white papers for free, without any signup or paywall. We’ve had to turn down a fair few projects to stick with our process, but there are plenty of vendors happy to support good research they can use to help sell their products, without having to bias or control the content. We’re walking a strange line between the traditional analyst model, media sponsorship, research department, and… well, whatever else it is we’re doing. Especially once we finish up and release our paid products. Anyway, I should probably get over it and stop over-thinking things. That’s what Rothman keeps telling me, not that I trust him either.

Back to that user panel – we’d like to run the occasional (1-2 times per quarter) short (less than 10 minutes) survey to help feed our research, and as part of supporting the OWASP survey program. We will release all the results for free, and we won’t be selling this list or anything. If you are interested, please email us at survey@securosis.com. End users only (for now) please – we do plan on expanding to vendors later. If you are at a vendor and are responsible for internal security, that also counts. All results will be kept absolutely anonymous. We’re trying to give back and give away as much as we can, and I have decided I don’t need to feel guilty for asking for a little of your time in exchange.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Adrian’s Dark Reading post on Dealing with Weak Passwords.
  • Rich’s TidBITS article on iPads for the enterprise.
  • Rich’s Endpoint DLP article took the cover of Information Security magazine.
  • Rich quoted by cnet on the Mac vs. PC security debate.
  • At TidBITS, Pepper points out that the dead are staging a comeback in the ebook market: Zombie Authors Threaten Fiction Ebook Market, from the Grave! – Brains, anyone?

Favorite Securosis Posts

  • Rich: Adrian’s post on Agile and SDL. Funny timing on this one, with Microsoft starting to release some new information on it.
  • Adrian: Mike’s Monitor Everything. I disagree with some of it, but there is so much good information that it’s my fave this week.
  • David Mortman: Analysis of Trustwave’s 2010 Breach Report. More yummy yummy data.
  • Mike: What do DLP and condoms have in common? Any time you can mention condoms on a corporate blog, it’s a win. ‘nuf said.
  • Meier: Comments on Microsoft Simplified SDL. I was hoping Adrian would do a rundown when I saw this earlier, and I enjoyed how he broke it out.

Other Securosis Posts

  • The NSA Isn’t Evil (Even Working with Google)
  • Database Security Fundamentals: Access & Authorization
  • Need Brains. User Brains
  • Incite 2/2/2010: The Life of the Party
  • You Have to Buy Data Security Tools
  • Pragmatic Data Security: Discover
  • The Network Forensics (Full Packet Capture) Revival Tour
  • Network Security Fundamentals: Default Deny (UPDATED)

Favorite Outside Posts

  • Rich: Jeremiah’s great post on why we need to break the web to secure it. This is one of the biggest problems we face on the web – the refusal to make important changes which would enable us to move forward, for fear of breaking older content. Not that we should break things willy-nilly, but many of the bits we are talking about breaking are easy to work around in terms of still providing users the same browsing experience. It’s the ad networks that are the big problem.
  • Adrian: Krebs on ATM Skimmers, part 1 and 2 – very practical security tips.
  • Mike: Kudos to Will Gragido, who makes a play for the fundamental building block of pragmatic philosophy – Accountability the non-Negotiable Asset. Keep in mind that accountability cuts both ways: you need to be accountable for meeting deliverables and managing expectations, and folks in your organization need to be accountable for not doing stupid things.
  • David Mortman: Excerpts from Randy George’s “Dark Side of DLP”. “It’s not just enough to recognize badness; someone has to be able to classify badness, with authority.” Says so much about security, not just DLP.
  • Chris: Twitter: real but malicious BitTorrent trackers harvesting accounts. Who knew Twitter had real security staff?
  • Meier: How secure are you? Access was easy at 9 out of 10 buildings. It’s easy for staff writers at the Orlando Sentinel – it’s easy for anyone.

Project Quant Posts

  • Project Quant: Database Security – WAF
  • Project Quant: Database Security – Encryption
  • Project Quant: Project Comments
  • Project Quant: Database Security – Protect through Monitoring
  • Project Quant: Database Security – Audit
  • Project Quant: Database Security – Monitoring
  • Project Quant: Database Security – Open Question to Database Security Community
  • Project Quant: Database Security – Shield

Top News and Posts

  • House passes cybersecurity bill. This hit right as we were going to press, so we’ll provide analysis later.
  • PGP Acquires TC TrustCenter & Chosen Security. If a PKI falls in the woods, does anyone hear it?
  • David Litchfield hangs up the gloves. David is an exceptional researcher who was a powerful counterbalance to Oracle marketing. Sad to see him go.


Analysis of Trustwave’s 2010 Breach Report

Trustwave just released their latest breach (and penetration testing) report, and it’s chock full of metrics goodness. Like the Verizon Data Breach Investigations Report, it’s a summary of information based on their responses to real breaches, with a second section on results from their penetration tests. The breach section is the best part, and I already wrote about one lesson in a quick post on DLP. Here are a few more nuggets that stood out:

  • It took an average of 156 days to detect a breach, and only 9% of victims detected the breach on their own – the rest were discovered by outside groups, including law enforcement and credit card companies.
  • Chip and PIN was called out for being more resilient against certain kinds of attacks, based on global analysis of breaches. Too bad we won’t get it here in the US anytime soon.
  • One of the biggest sources of breaches was remote access applications for point of sale systems. Usually these are in place for third party support/management, and configured poorly.
  • Memory parsing (scanning memory to pull sensitive information as it’s written to RAM) was a very common attack technique (a short sketch below shows the core idea). I find this interesting, since certain versions of memory parsing attacks have virtualization implications… and thus cloud implications. This is something to keep an eye on, and an area I’m researching more.
  • As I mentioned in the post 5 minutes ago, encryption was only used once for exfiltration.

Now some suggestions for the SpiderLabs guys:

  • I realize it isn’t politically popular, but it would totally rock if you, Verizon, and other response teams started using a standard base of metrics. You can always add your own stuff on top, but that would really help us perform external analysis across a wider base of data points. If you’re interested, we’d totally be up for playing neutral third party, coordinating and publishing a recommended metrics base.
  • The pen testing section would be a lot more interesting if you released the metrics, as opposed to a “top 10” list of issues found. We don’t know whether issue 1 was 10x more common than issue 10, or 100x.

It’s great that we now have another data source, and I consider all these reports mandatory reading – far more interesting than surveys.
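For those unfamiliar with memory parsing, here is a minimal, purely illustrative sketch of the idea. A real attack reads another process’s address space (for example via ReadProcessMemory on Windows) rather than a fabricated buffer, but the scan itself really is this simple:

    import re

    # Toy stand-in for a snapshot of a point-of-sale process's memory.
    memory_snapshot = b"\x00\x17order=shotglass\x004111111111111111\x00\xff"

    # Card data being processed sits in RAM as plaintext, no matter how well
    # it is encrypted on disk or on the wire, so a digit-run scan finds it.
    PAN_RUN = re.compile(rb"(?<!\d)\d{13,16}(?!\d)")

    for match in PAN_RUN.finditer(memory_snapshot):
        print("candidate PAN in memory:", match.group().decode())

This is why disk and transport encryption alone did not stop these breaches: the data has to be decrypted somewhere to be used, and that somewhere is memory.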


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments – just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.