RSAC 2010 Guide: Data Security

Over the next 3 days, we’ll be posting the content from the Securosis Guide to the RSA Conference 2010. We broke the market into 8 different topics: Network Security, Data Security, Application Security, Endpoint Security, Content (Web & Email) Security, Cloud and Virtualization Security, Security Management, and Compliance. For each section, we provide a little history and what we expect to see at the show. Next up is Data Security.

Data Security

Although technically nearly all of Information Security is directed at protecting corporate data and content, in practice our industry has historically focused on network and endpoint security. At Securosis we divide the data security world into two major domains based on how users access data – the data center and the desktop. This reflects how data is actually managed far more practically than splitting it into “structured” and “unstructured”. The data center includes access through enterprise applications, databases, and document management systems. The desktop includes productivity applications (the Office suite), email, and other desktop applications and communications.

What We Expect to See

There are four areas of interest at the show relative to data security:

Content Analysis: This is the ability of security tools to dig inside files and packets to understand the content itself, not just the headers or other metadata. The most basic versions rely on pattern matching (regular expressions), while advanced options include partial document matching and database fingerprinting. Content analysis techniques were pioneered by Data Loss Prevention (DLP) tools, and are starting to pop up in everything from firewalls, to portable device control agents, to SIEM systems. The most important questions to ask are about the kind of content analysis actually being performed. Regular expressions alone can work, but they produce more false positives and negatives than the other options (a short sketch later in this section shows the difference). Also find out whether the feature can peer inside different file types, or only analyze plain text. Depending on your requirements, you may not need advanced techniques, but you do need to understand exactly what you’re getting and determine whether it will really help you protect your data, or just generate thousands of alerts every time someone buys a collectable shot glass from Amazon.

DLP Everywhere: Here at Securosis we use a narrow definition for DLP that includes only solutions designed to protect data with advanced content analysis capabilities and dedicated workflow, but not every vendor marketing department agrees with our approach. Given the customer interest around DLP, we expect you’ll see a wide variety of security tools with DLP or “data protection” features, most of which amount to either basic content analysis or some form of context-based file or access blocking. These DLP features can be useful, especially in smaller organizations and those with only limited data protection needs, but they are a pale substitute if you need a dedicated data protection solution. When talking with these vendors, start by digging into their content analysis capabilities and how they really work from a technical standpoint. If you get a technobabble response, just move on. Also ask to see a demo of the management interface – if you expect a lot of data-related violations, you will likely need a dedicated workflow to manage incidents, so user experience is key. Finally, ask about directory integration – when it comes to data security, different rules apply to different users and groups.
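
To make the content analysis point concrete, here is a minimal, hypothetical sketch (in Python, not taken from any vendor’s engine) of the gap between regex-only matching and adding even one validation step. A bare 16-digit pattern flags order numbers and tracking codes; pairing it with a Luhn checksum discards most of those card lookalikes. Real DLP products layer on far more, including proximity keywords, file cracking, partial document matching, and database fingerprinting.

```python
import re

# 16 card-like digits, optionally separated by single spaces or dashes.
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){15}\d\b")

def luhn_valid(candidate: str) -> bool:
    """Return True if the digit string passes the Luhn checksum used by payment cards."""
    digits = [int(c) for c in candidate if c.isdigit()]
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_card_numbers(text: str) -> list[str]:
    """Regex narrows the field; the checksum discards most random 16-digit strings."""
    return [m.group() for m in CARD_PATTERN.finditer(text) if luhn_valid(m.group())]

# The order number is (usually) rejected; the test card number 4111 1111 1111 1111 passes.
sample = "Order 1234 5678 9012 3456 shipped. Card 4111 1111 1111 1111 on file."
print(find_card_numbers(sample))
```

Even with the checksum, roughly one in ten random 16-digit strings will still pass, which is why better tools add contextual and structural checks before raising an alert.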

Encryption and Tokenization: Thanks to a combination of PCI requirements and recent data breaches, we are seeing a ton of interest in application and database encryption and tokenization. Tokenization replaces credit card numbers or other sensitive strings with random token values (which may preserve the credit card format), with the mapping back to the real numbers held only in a central, highly secured database (a minimal sketch appears at the end of this section). Format Preserving Encryption instead encrypts the numbers so you can recover them later, with the encrypted values retaining the credit card number format so they fit in place. Finally, newer application and database encryption options focus on improved ease of use and deployment compared to their predecessors. You don’t really need to worry about encryption algorithms, but it is important to understand platform support, management user experience (play around with the user interface), and deployment requirements. No matter what anyone tells you, there are always requirements for application and database changes, but some of these approaches can minimize the pain. Ask how long an average deployment takes for an organization of your size, and make sure they can provide real examples or references in your industry, since data security is very industry specific.

Database Security: Due partially to acquisitions and partially to customer demand, we are seeing a variety of tools add database security features. Latest in the hit parade are SIEM tools capable of monitoring database transactions, and vulnerability assessment tools with database support. These parallel the dedicated Database Activity Monitoring and Database Assessment markets. As with any area of overlap and consolidation, you’ll need to figure out whether you need a dedicated tool, or whether the features in another type of product are good enough. We also expect to see a lot more talk about data masking, which is the conversion of production data into a pseudo-random but still usable format for development.
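
As promised above, here is a minimal, hypothetical sketch of tokenization in Python. It is only meant to show the shape of the idea: a random, format-preserving token stands in for the card number everywhere downstream, and only systems with access to the token vault can map it back. A real product would use a hardened, access-controlled vault (often HSM-backed), carefully scoped token spaces, and audited detokenization, none of which appears here.

```python
import secrets

# Stand-in for the central, highly secured token vault; a real deployment would
# never keep this mapping in application memory.
_token_to_pan: dict[str, str] = {}
_pan_to_token: dict[str, str] = {}

def tokenize(pan: str) -> str:
    """Replace a card number (PAN) with a random token of the same length.

    The last four digits are preserved so receipts and support screens that
    only show 'xxxx xxxx xxxx 1234' keep working without the real number.
    """
    digits = "".join(c for c in pan if c.isdigit())
    if digits in _pan_to_token:
        return _pan_to_token[digits]
    while True:
        token = "".join(str(secrets.randbelow(10)) for _ in digits[:-4]) + digits[-4:]
        if token != digits and token not in _token_to_pan:
            break
    _token_to_pan[token] = digits
    _pan_to_token[digits] = token
    return token

def detokenize(token: str) -> str:
    """Recover the real number; only systems trusted with vault access should call this."""
    return _token_to_pan[token]

token = tokenize("4111 1111 1111 1111")
print(token, "->", detokenize(token))
```

Contrast this with Format Preserving Encryption, where there is no vault lookup at all: the format-preserving value is a ciphertext, and any system holding the key can decrypt it back into the original number.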

RSAC 2010 Guide: Network Security

Over the next 3 days, we’ll be posting the content from the Securosis Guide to the RSA Conference 2010. We broke the market into 8 different topics: Network Security, Data Security, Application Security, Endpoint Security, Content (Web & Email) Security, Cloud and Virtualization Security, Security Management, and Compliance. For each section, we provide a little history and what we expect to see at the show. First up is Network Security.

Network Security

Since we’ve been connecting to the Internet, people have been focused on network security, so the sector has gotten reasonably mature. As a result, there has been a distinct lack of innovation over the past few years. There have certainly been hype cycles (NAC, anyone?), but most organizations still focus on the basics of perimeter defense. That means intrusion prevention (IPS) and reducing complexity by collapsing a number of functions into an integrated Unified Threat Management (UTM) device.

What We Expect to See

There are four areas of interest at the show for network security:

Application Awareness: This is the ability of devices to decode and protect against application layer attacks. Since most web applications are encapsulated in HTTP (port 80) or HTTPS (port 443) traffic, network devices need to dig into each packet and understand what the application is doing in order to really see what’s happening. This capability is called deep packet inspection (DPI), and most perimeter devices claim to provide it, making for a confusing environment full of unsubstantiated vendor claims. The devil is in the details of how each vendor implements DPI, so focus on which protocols they understand and what kinds of policies and reporting are available on a per-protocol basis (a minimal sketch of payload-based classification appears at the end of this section).

Speeds and Feeds: As with most mature markets, especially on the network, at some point it comes down to who has the biggest and fastest box. This kind of packet decoding and attack signature matching requires a lot of horsepower, and we are seeing 20 Gbps IPS devices appear. You will also see blade architectures on integrated perimeter boxes, and other features focused on adding scale as customer networks continue to get faster. Since every organization has different requirements, spend some time ahead of the show understanding what you need and how you’d like to architect your network security environment. Get it down on a single piece of paper and head down to the show floor. When you get to the vendor booth, find an SE (don’t waste time with a sales person) and have them show you how their product(s) can meet your requirements. They’ll probably want to show you their fancy interface and some other meaningless crap. Stay focused on your issues and don’t leave until you understand in your gut whether the vendor can get the job done.

Consolidation and Integration: After years of adding specific boxes to solve narrow problems, many organizations’ perimeter networks are messes. Thus the idea of consolidating both boxes (with bigger boxes) and functions (with multi-function devices) continues to be interesting. There will be lots of companies on the show floor talking about their UTM devices, targeting small companies and large enterprises with similar equipment. Of course, the needs of the enterprise fundamentally differ from small business requirements, so challenge how well suited any product is for your environment. That means breaking out your one-page architecture again, and having the SEs on the show floor show you how their integrated solutions can solve your problems. Also challenge them on their architecture, given that the more a box needs to do (firewall, IPS, protocol decoding, content security, etc.), the lower its throughput. Give vendor responses the sniff test and invite those who pass in for a proof of concept.

Forensics: With the understanding that we cannot detect some classes of attacks in advance, forensics and full packet capture gear will be high profile at this year’s conference. This actually represents progress, although you will still see a number of vendors talking about blocking APT-like attackers. The reality is (as we’ve been saying for a long time under the React Faster doctrine) that you can’t stop the attacks (not all of them, anyway), so you had better figure out sooner rather than later that you have been compromised, and then act accordingly. The key issues around forensics are user experience, chain of custody, and scale. Most of today’s networks generate a huge amount of data, and you’ll have to figure out how to make that data usable, especially given the time constraints inherent to incident response. You also need to get comfortable with evidence gathering and data integrity, since it’s easy to say the data will hold up in court, but much harder to make it do so.

And for those of you who cannot stand the suspense, you can download the entire guide (PDF).
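
As referenced in the Application Awareness discussion above, here is a deliberately tiny, hypothetical Python sketch of why port numbers alone say very little: it guesses the protocol from the first bytes of the payload rather than the port it arrived on. Commercial DPI engines decode full protocol state machines and reassemble streams; this only illustrates the difference in approach.

```python
# Byte prefixes a few common protocols put at the start of a connection.
SIGNATURES = [
    (b"GET ", "HTTP"),
    (b"POST ", "HTTP"),
    (b"SSH-", "SSH"),
    (b"\x16\x03", "TLS handshake"),
    (b"\x13BitTorrent protocol", "BitTorrent"),
]

def classify(payload: bytes) -> str:
    """Guess the application protocol from payload content, ignoring the port."""
    for prefix, name in SIGNATURES:
        if payload.startswith(prefix):
            return name
    return "unknown"

# Both of these could arrive on port 80; only payload inspection tells them apart.
print(classify(b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\n"))  # HTTP
print(classify(b"\x13BitTorrent protocol" + bytes(8)))                      # BitTorrent
```

The questions to ask vendors map directly onto this: which protocols do they actually decode (beyond simple prefix or signature matching), and what policies and reporting can you attach once a protocol is identified.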

RSVP for the Securosis and Threatpost Disaster Recovery Breakfast

We quite enjoy all the free evening booze at the RSA conference, but most days what we’d really like is just a nice, quiet breakfast. Seriously, what’s with throwing massive parties for people to network, then blasting the music so loud that all we can do is stand around and stare at the mostly-all-dude crowd?

In response, last year we started up the Disaster Recovery Breakfast, and it went over pretty well. It’s a nice quiet breakfast with plenty of food, coffee, recovery items (aspirin & Tums), and even the hair of the dog for those of you not quite ready to sober up. No marketing, no presentations, no sales types trolling for your card. Sit where you want, drop in and out as much as you want, and if you’re really a traditionalist, blast your iPod and stand in a corner staring at us while nursing a Bloody Mary.

This year we will be holding it Thursday morning at Jillian’s in the Metreon, from 8 to 11. It’s an open door during that window, so feel free to stop by at any time and stay as long as you want. We’re even cool if you drive through just to mooch some quick coffee. Please RSVP by dropping us a line at rsvp@securosis.com, and we’ll see you there!

RSAC 2010 Guide: Application Security

Incite 2/23/10: Flexibility

It is said that unhappiness results from either not getting what you want, or getting what you don’t want. I’m pretty sure strep throat qualifies as something you don’t want, and it certainly is causing some unhappiness in Chez Rothman. Yesterday I picked up 4 different antibiotics for everyone in the house except me, which must qualify me for some kind of award at the Publix pharmacy.

I like to think of myself as a reasonably flexible person who can go with the flow – but in reality, not so much. I don’t necessarily have a set schedule, but I know what I need to get done during the day and roughly when I want to work on certain things. But when the entire family is sick, you need to improvise a bit. Unfortunately that is hard for a lot of people, including me.

So when the best laid plans of sitting down and cranking out content were subverted by a high maintenance 6 year old – who wanted to converse about all sorts of things and wanted me to listen – I needed to engage my patience bone. Oh yeah, I don’t have a patience bone. I don’t even have a patience toenail. So I got a bit grumpy, snarled a bit, and was generally an ass. The Boss was good about pointing out that I’m under a lot of stress heading into a big conference and should get a wide berth, but that’s a load of crap. I had my priorities all screwed up. I needed to take a step back, view this as a positive, and figure this is another great opportunity to work on my patience and show the flexibility I claim to have. So I chat with my girl when she’s done watching Willy Wonka, and I go out to the pharmacy and get the medicine.

Here is the deal – crap is going to happen. You’ll get sick at the most inopportune time. Or your dog will. Or maybe it’s your kid. Or your toilet will blow up, or your washing machine will crap out. It’s always something. And there are two ways to deal with it. You can get pissy (like I did this morning), which doesn’t really do anything except make a bad situation worse. My other option was to realize that I’m lucky to have a flexible work environment and a set of partners who can (and do) cover for me. Yes, the latter is the right answer. So I cover at home when I need to, and soon enough I’ll be back to my regular routine, and that will be good too.

Um, I’m not sure who wrote this post, but I kind of like him. – Mike

Photo credit: “Be Flexible” originally uploaded by Chambo25

Incite 4 U

I’d like to say it’s the calm before the storm, but given that 4 out of the 5 people I live with are sick, there’s no calm on the home front, and the last-minute prep work involved in getting ready for the RSA Conference always makes the week before somewhat frantic. And that’s a good description of this week thus far. If you are heading out to San Francisco, check out our Securosis Guide to the RSA Conference 2010 (PDF), or the bite-size chunks as we post them on the blog this week. That should help you get a feel for the major themes and what to look for at the show. Finally, make sure to RSVP for the Disaster Recovery Breakfast we are hosting on Thursday morning with the fine folks of Threatpost.

Without exploits, what’s the point? – Andy the IT Guy wrote a piece about whether pen tests require the use of exploits. He cites some PCI chapter and verse, coming to the conclusion that exploits are not required for the pen testing requirement of PCI. Whether that is or isn’t required is up to your assessor, but it misses the point. Yes, exploits can be dangerous and they can knock stuff down. But pen testing with real exploits is the closest you are going to get to a real-world scenario. That old adage that no battle plan survives contact with the enemy – it’s true. Your vulnerability scanner will tell you what’s vulnerable, not what can actually be exploited, and I can assure you the bad guys don’t just stop once they’ve knocked on your door with Nessus. – MR

IE6 + Adobe = Profit! – An article by Brian Krebs on a new experimental tool to prevent drive-by malware on Windows got me thinking. Blade (BLock All Drive-by Exploits) doesn’t stop the exploit, but supposedly eliminates the ability to install a download without user approval. Assuming it works as advertised, it could be useful, although it won’t stop horny users from installing malware in attempts to view videos of nekked folks. But the interesting part is the statistics from their testing – over 40% of attacks are against IE 6, with a whopping 67% of drive-by attacks targeting Adobe Reader or Flash. If those numbers don’t give you at least a little juice with management to update your applications and get off IE6, or to prioritize Adobe patches, perhaps it’s time to polish the resume. – RM

Socially Inept – Security Barbie had a good post on the Rapid 7 incident, “My ode to Rapid7”, after a few sales people Twitter- and LinkedIn-spammed the bejesus out of the entire security community. Or at least the echo chamber of folks most likely to bitch about it. “Fine, fine. I’m gonna take them off my list of successful people today.” I am not poking fun at Rapid7, but there are strange boundaries around what is appropriate and inappropriate behavior on venues like Twitter. It’s fine to ask my friends what they think of a product or company, but not OK for people I don’t know from that company to offer an opinion. Every corporation out there has a PR and media strategy for

Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments – just like everyone else. There is no back channel influence on the research findings or posts. Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments and input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare). Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, at the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.