
Monitoring up the Stack: DAM, Part 1

Database Activity Monitoring (DAM) is a form of application monitoring that looks at database-specific transactions, and integration of DAM data into SIEM and Log Management platforms is becoming more prevalent. Regular readers of this blog know that we have covered this topic many times, and gone into gory technical detail in order to help differentiate between products. If you need that level of detail, I'll refer you to the database security page in the Securosis Research Library. Here I will give the "Cliff's Notes" version, describing what the technology is and some of the problems it solves. The idea is to explain how DAM augments SIEM and Log Management analysis, and give end users an understanding of how DAM extends the analysis capabilities of your monitoring strategy.

So what is Database Activity Monitoring? It's a system that captures and records database events – which at a minimum means all Structured Query Language (SQL) activity – in real time or near real time, including database administrator activity, across multiple database platforms, generating alerts on policy violations. That's Rich's definition from four years ago, and it still captures the essence.

For those of you already familiar with SIEM, DAM is very similar in many ways. Both follow a similar process of collecting, aggregating, and analyzing data. Both provide alerts and reports, and integrate into workflow systems to leverage the analysis. Both collect different data types, in different formats, from heterogeneous systems. And both rely on correlation (and in some cases enrichment) to perform advanced analytics.

How are they different? The simple answer is that they collect different events and perform different analyses. But there is another significant difference, which I stressed in this series' introductory post: context. Database Activity Monitoring is tightly focused on database activity and how applications use the database (for good and not-so-good purposes). With specific knowledge of appropriate database use and operations, and a complete picture of database events, DAM is able to analyze database statements with far greater effectiveness. In a nutshell, DAM provides focused monitoring of one single important resource in the application chain, while SIEM provides great breadth of analysis across all devices.

Why is this important?

  • SQL injection protection: Database activity monitoring can filter and protect against many SQL injection variants. It cannot provide complete prevention, but statement and behavioral analysis techniques catch many known and unknown database attacks. By whitelisting specific queries from specific applications (a minimal sketch appears at the end of this post), DAM can detect tampered and otherwise malicious queries, as well as queries from unapproved applications (which usually doesn't bode well). And DAM can transcend monitoring and actually block a SQL injection before the statement arrives at the database.
  • Behavioral monitoring: DAM systems capture and record activity profiles, both for generic user accounts and for specific database users. Changes in a specific user's behavior might indicate disgruntled employees, hijacked accounts, or even oversubscribed permissions.
  • Compliance purposes: Given DAM's complete view of database activity, and its ability to enforce policies on both a statement and a transaction/session basis, it's a proven source to substantiate controls for regulatory requirements like Sarbanes-Oxley. DAM can verify that controls are both in place and effective.
  • Content monitoring: A couple of the DAM offerings additionally inspect content, so they are able to detect both SQL injection – as mentioned above – and content injection. It's common for attackers to abuse social networking and file/photo sharing sites to store malware. When 'friends' view images or files, their machines become infected. By analyzing the 'blob' of content prior to storage, DAM can prevent some 'drive-by' injection attacks.

That should provide enough of an overview to start thinking about whether and how to add DAM to your monitoring strategy. To get there, next we'll dig into the data sources and analysis techniques used by DAM solutions, so you can determine whether the technology would enhance your ability to detect threats while increasing leverage.
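To illustrate the whitelisting idea mentioned above, here is a minimal sketch (not any vendor's implementation, just a toy in Python with an invented application name, table, and whitelist) of how a DAM policy engine might normalize captured SQL statements and flag anything that falls outside the learned set for a given application.

```python
import re

# Hypothetical per-application whitelist of normalized statement shapes.
# A real DAM product builds this during a learning period; these entries
# are invented for illustration.
APPROVED = {
    "billing_app": {
        "select name, balance from accounts where id = ?",
        "update accounts set balance = ? where id = ?",
    }
}

def normalize(sql):
    """Reduce a statement to its structural form: lowercase, literals
    replaced with '?', whitespace collapsed, so equivalent queries share
    a single fingerprint."""
    s = sql.strip().lower()
    s = re.sub(r"'[^']*'", "?", s)           # string literals
    s = re.sub(r"\b\d+(\.\d+)?\b", "?", s)    # numeric literals
    s = re.sub(r"\s+", " ", s)
    return s

def check(app, sql):
    """Return 'allow' or an alert string for a captured statement."""
    allowed = APPROVED.get(app)
    if allowed is None:
        return "alert: unapproved application"
    if normalize(sql) not in allowed:
        return "alert: statement outside whitelist"
    return "allow"

# A legitimate query matches its learned shape; a tampered query (the
# classic ' OR '1'='1 injection) normalizes to a shape that was never
# learned, so it gets flagged -- or blocked, if deployed inline.
print(check("billing_app", "SELECT name, balance FROM accounts WHERE id = 42"))
print(check("billing_app",
            "SELECT name, balance FROM accounts WHERE id = '' OR '1'='1'"))
```

Real products parse SQL rather than pattern-matching it, and blocking happens at an inline proxy or agent, but the allow/alert decision follows the same basic logic.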


Friday Summary: September 24, 2010

We are wrapping up a pretty difficult summer here at Securosis. You have probably noticed from the blog volume, as we have been swamped with research projects. Rich, Mike, and I have barely spoken with one another over the last couple months as we are head-down, researching and writing as fast as we can. No time for movies, parties, or vacation travel. These Quant projects we have been working on make us feel like we have been buried in sand. I have been this busy several times during my career, but I can't say I have ever been busier. I don't think that would be possible, as there are not enough hours in the day! Mike's been hiding at undisclosed coffee shops to the point his family had his face put on a milk carton. Rich has taken multitasking to a new level by blogging in the shower with his iPad. Me? I hope to see the shower before the end of the month. I must say, despite the workload, projects like Tokenization and PCI Encryption have been fun. There is light at the end of the proverbial tunnel, and we will even start taking briefings again in a couple weeks. But what really keeps me going is having work to do. If I even think about complaining about the work level, something in the back of my brain reminds me that it is very good to be busy. It beats the alternative.

By the time this post goes live I will be taking part of the day off from work to help friends load all their personal belongings into a truck. After 26 years with the same employer, one of my friends here in Phoenix was laid off. He and his wife, like many of the people I know in Arizona, are losing their home. 22 years of accumulated stuff to pack … whatever is left from the various garage sales and giveaways. This will be the second friend I have helped move in the last year, and I expect it will happen a couple more times before this economic depression ends. But as depressing as that may sound, after 14 months of haggling with the bank, I think they are just relieved to be done with it and move on. They now have a sense of relief from the pressure, and in some ways are looking forward to the next phase of their life. And the possibility of employment. Spirits are high enough that we'll actually throw a little party and celebrate what's to come. Here's to being busy!

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences
  • Seven Features To Look For In Database Assessment Tools.
  • Mike's presentation on Endpoint Security Fundamentals.
  • Adrian's Dark Reading post: Protegrity Gets Aggressive.
  • Adrian quoted in TechTarget. And I'll probably catch hell for this.

Favorite Securosis Posts
  • Rich: Monitoring up the Stack: Threats. Knowing what to monitor, and how to pull the value from it, is a heck of a lot tougher than merely collecting data. Mike and Adrian are digging in and showing us how to focus.
  • Mike Rothman: Monitoring up the Stack: Threats. This blog series is getting going and it's going to be cool. Getting visibility beyond just the network/systems is critical.
  • David Mortman: Monitoring up the Stack: Threats.
  • Adrian Lane: FireStarter: It's Time to Talk about APT.

Other Securosis Posts
  • Government Pipe Dreams.
  • NSO Quant: Clarifying Metrics (and some more links).
  • Monitoring up the Stack: File Integrity Monitoring.
  • Incite 9/22/2010: The Place That Time Forgot.
  • New Paper (+ Webcast): Understanding and Selecting a Tokenization Solution.
  • NSO Quant: Manage Process Metrics, Part 1.
  • Understanding and Selecting an Enterprise Firewall: Selection Process.
  • Upcoming Webinar: Selecting SIEM.
Favorite Outside Posts
  • Rich: 2010 Website Security Statistics Report. Once again, Jeremiah provides some absolutely amazing numbers on the state of web site security. He pulled together stats from over 2000 web sites across 350 organizations to provide us all some excellent benchmarks for things like numbers and types of vulnerabilities (by vertical) and time to remediate. Truly excellent, and unbiased, work.
  • Mike Rothman: Do you actually care about privacy? Lots of us say we do. Seth Godin figures we are more worried about being surprised. It makes you think.
  • Chris Pepper: evercookie: doggedly persistent cookies. By the guy who XSSed MySpace!
  • David Mortman: Cyber Weapons.
  • Adrian Lane: Titanic Secret Revealed. A serious case of focusing on the wrong threat!
  • Chris Pepper: Little Bobby Tables moves to Sweden.

Project Quant Posts
  • NSO Quant: Manage Metrics – Monitor Issues/Tune IDS/IPS.
  • NSO Quant: Manage Metrics – Deploy and Audit/Validate.
  • NSO Quant: Manage Metrics – Process Change Request and Test/Approve.

Research Reports and Presentations
  • Understanding and Selecting a Tokenization Solution.
  • Security + Agile = FAIL Presentation.
  • Data Encryption 101: A Pragmatic Approach to PCI.
  • White Paper: Understanding and Selecting SIEM/Log Management.
  • White Paper: Endpoint Security Fundamentals.
  • Understanding and Selecting a Database Encryption or Tokenization Solution.
  • Low Hanging Fruit: Quick Wins with Data Loss Prevention.

Top News and Posts
  • Twitter Worm Outbreak. The most interesting security event of the week.
  • New Security Microchip Vuln.
  • Mac OS (iOS and OS X) Security Updates.
  • New Autofill Hack Variant.
  • VMware Security Hardening Guide (PDF).
  • evercookie. Many of you probably saw the re-tweet stream this week. Yes, this looks nasty and a pain in the ass to remove. Maybe I need to move all my browsing to temporary partitions.
  • More Conjecture on Stuxnet Malware and some alternate opinions. And some funny quotes on Schneier's blog.
  • My relentless pursuit of the guy who robbed me. Cranky amateur cyber-sleuth FTW!
  • DRG SSH Username and Password Authentication Tag Clouds. Nice rendering of human nature (you can call it laziness or stupidity, as you prefer).

Blog Comment of the Week
Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week's best comment goes to ds, in response to FireStarter: It's Time to Talk about APT.

I think you are oversimplifying the situation regarding the reasons for classifying information. It is well known that information has value, and sometimes that value diminishes if others are aware you know it. Consider the historical case of the Japanese codes in WWII. If the


Monitoring up the Stack: File Integrity Monitoring

We kick off our discussion of additional monitoring technologies with a high-level overview of file integrity monitoring. As the name implies, file integrity monitoring detects changes to files – whether text, configuration data, programs, code libraries, critical system files, or even Windows registries. Files are a common medium for delivering viruses and malware, and detecting changes to key files can provide an indication of machine compromise.

File integrity monitoring works by analyzing changes to individual files. Any time a file is changed, added, or deleted, it's compared against a set of policies that govern file use, as well as signatures that indicate intrusion. Policies can be as simple as a list of operations that are not allowed on a specific file, or can include more specific comparisons of the contents and the user who made the change. When a policy is violated, an alert is generated. Changes are detected by examining file attributes: specifically name, date of creation, time last modified, ownership, byte count, a hash to detect tampering, permissions, and type. Most file integrity monitors can also 'diff' the contents of the file, comparing before and after contents to identify exactly what changed (for text-based files, anyway). All these comparisons are made against a stored baseline: a reference set of attributes that designates what state the file should be in, optionally including the file contents themselves, along with instructions on what to do if a change is detected.

File integrity monitoring can be periodic – at intervals from minutes to every few days. Some solutions offer real-time threat detection that performs the inspection as the files are accessed. The monitoring can be performed remotely – accessing the system with user credentials and instructing the operating system to periodically collect relevant information – or an agent can be installed on the target system that performs the data collection locally and returns data upstream to the monitoring server.

As you can imagine, even a small company changes files a lot, so there is a lot to look at. And there are lots of files on lots of machines – as in tens of thousands. Vendors of integrity monitoring products provide a basic list of critical files and policies, but you need to configure the monitoring service to protect the rest of your environment. Keep in mind that some attacks are not fully defined by a policy, and verification/investigation of suspicious activity must be performed manually. Administrators need to balance performance against coverage, and policy precision against adaptability. Specify too many policies and track too many files, and the monitoring software consumes tremendous resources. File modification policies designed for maximum coverage generate many false-positive alerts that must be manually reviewed. Rules must strike a balance between catching specific attacks and detecting broader classes of threats.

These challenges are mitigated in several ways. First, monitoring is limited to just those files that contain sensitive information or are critical to the operation of the system or application. Second, policies have different criticality, so that changes to key infrastructure or matches against known attack signatures get the highest priority. The vendor supplies rules for known threats and to cover compliance mandates such as PCI-DSS. Suspicious events that suggest an attack or policy violation are the next priority.
Finally, permitted changes to critical files are logged for manual review at a lower priority, to help reduce the administrative burden. File integrity monitoring has been around since the mid-90s, and has proven very effective for detection of malware and system compromise. Changes to Windows registry files and open source libraries are common hacks, and very difficult to detect manually. While file monitoring does not help with many of the web and browser attacks that use injection or alter programs in memory, it does detect many types of persistent threats, and is therefore a very logical extension of existing monitoring infrastructure.
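As a rough illustration of the attribute-and-hash comparison described above, here is a minimal sketch in Python (the watched paths and the alerting are invented for the example, not taken from any product) that records a baseline of size, modification time, and SHA-256 digest for a set of files, then reports anything deleted or modified on a later pass.

```python
import hashlib
import os

# Hypothetical watch list; real deployments start from vendor-supplied
# policies covering system binaries, libraries, configs, and registry hives.
WATCHED = ["/etc/passwd", "/etc/ssh/sshd_config"]

def snapshot(path):
    """Collect the attributes used for comparison: size, mtime, and a hash."""
    st = os.stat(path)
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return {"size": st.st_size, "mtime": st.st_mtime, "sha256": digest}

def baseline(paths):
    """Record the reference state for every watched file that exists."""
    return {p: snapshot(p) for p in paths if os.path.exists(p)}

def compare(base, paths):
    """Yield (path, reason) for each baselined file that was deleted or altered."""
    for p in paths:
        if p not in base:
            continue
        if not os.path.exists(p):
            yield p, "deleted"
        elif snapshot(p) != base[p]:
            yield p, "modified"

if __name__ == "__main__":
    base = baseline(WATCHED)
    # ... later, on a schedule or triggered by file-access events ...
    for path, reason in compare(base, WATCHED):
        print(f"ALERT: {path} {reason}")
```

A real agent also tracks ownership, permissions, and content diffs, and maps each alert to a policy priority as described above, but the baseline-and-compare loop is the core of the technique.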


Monitoring up the Stack: Threats

In our introductory post we discussed how customers are looking to derive additional value from their SIEM and Log Management investments by looking at additional data types and climbing the stack. Part of the dissatisfaction we hear from customers is the challenge of turning collected data into actionable information for operational efficiency and compliance requirements. This challenge is compounded by the clear focus on application-oriented attacks. For the most part, our detection only pays attention to the network and servers, while the attackers are flying above that. It's kind of like repeatedly missing the bad guys because they are flying at 45,000 feet, but you cannot get above 20,000 feet. You aren't looking where the attacks are actually happening, which obviously presents problems. At its core SIEM can fly at 45,000' and monitor application components looking for attacks, but it will take work to get there. And given the evolution of the attack space, we don't believe keeping monitoring focused on infrastructure is an option, even over the middle term.

What kind of application threats are we talking about? It's not brain surgery, and you've seen all of these examples before, but they warrant another mention because we continue to miss opportunities to focus on detecting these attacks. For example:

  • Email: You click a link in a 'joke-of-the-day' email your spouse forwarded, which installs malware on your system and then tries to infect every machine on your corporate network. A number of devices get compromised and become latent zombies waiting to blast your network and others.
  • Databases: Your database vendor offers a new data replication feature to address failover requirements for your financial applications, but it's installed with public credentials. Any hacker can now replicate your database, without logging in, just by issuing a database command. Total awesomeness!
  • Web Browsers: Your marketing team launches a new campaign, but the third party content provider site was hacked. As your customers visit your site, they are unknowingly attacked using cross-site request forgery and then download malware. The customers' credentials and browsing history leak to Eastern Europe, and fraudulent transactions get submitted from customer machines without their knowledge. Yes, that's a happy day for your customers, and also for you, since you cannot just blame the third party content provider. It's your problem.
  • Web Applications: Your web application development team, in a hurry to launch a new feature on time, fails to validate some incoming parameters. Hackers exploit the database through a common SQL injection vulnerability to add new administrative users, copy sensitive data, and alter database configuration – all through normal SQL queries. By the way, as simple as this attack is, a typical SIEM won't catch it because all the requests look normal and are authorized. It's an application failure that causes a security failure. (A minimal illustration follows this list.)
  • Ad-hoc applications: The video game your kid installed on your laptop has a keystroke logger that records your activity and periodically sends an encrypted copy to the hackers who bought the exploit. They replay your last session, logging into your corporate VPN remotely to extract files and data under your credentials. So it's fun when the corporate investigators show up in your office to ask why you sent the formula for your company's most important product to China.
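To make the Web Applications example concrete, here is a minimal sketch (table and input values invented, using Python's built-in sqlite3 module) showing the unvalidated-input pattern that enables this kind of SQL injection alongside the parameterized form that avoids it. Note that from a network-level tool's perspective both produce normal-looking, authorized database requests, which is exactly why they slip past.

```python
import sqlite3

# A toy database standing in for the application's backend.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", "admin"), ("bob", "user")])

def lookup_vulnerable(name):
    # Unvalidated input concatenated into the statement: the classic flaw.
    # name = "x' OR '1'='1" rewrites the WHERE clause and returns every row.
    query = "SELECT name, role FROM users WHERE name = '%s'" % name
    return conn.execute(query).fetchall()

def lookup_safe(name):
    # Parameterized query: the input is bound as data, never parsed as SQL.
    return conn.execute(
        "SELECT name, role FROM users WHERE name = ?", (name,)).fetchall()

print(lookup_vulnerable("x' OR '1'='1"))  # leaks the whole table
print(lookup_safe("x' OR '1'='1"))        # returns nothing
```

Statement-level monitoring can catch this case because the injected query's shape differs from what the application normally issues, even though every request arrives over an authorized connection.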
The power of distributed multi-app systems to deliver services quickly and inexpensively cannot be denied, which means we security folks will not be able to stop the trend – no matter what the risk. But we do have both a capability and a responsibility to ensure these services are delivered as securely as possible, and to watch for bad behavior. Many of the events we discussed are not logged by traditional network security tools, and to casual inspection the transactions look legitimate. Logic flaws, architectural flaws, and misused privileges look like normal operation to a router or an IPS. Browser exploits and SQL injection are difficult to detect without understanding the application functionality. More problematic is that damage from these exploits occurs quickly, requiring a shift from after-the-fact forensic analysis to real-time monitoring to give you a chance to interrupt the attack. Yes, we're really reiterating that application threats are likely to get "under the radar" and past network-level tools. Customers complain that the SIEM techniques they have are too slow to keep up with remote multi-stage attacks, code substitution, etc.; ill-suited to stopping SQL injection, rogue applications, data leakage, etc.; or simply ineffective against cross-site scripting, hijacked privileges, etc. – we keep hearing that current tools have no chance against these new attacks.

We believe the answer involves broader monitoring capabilities at the application layer, and related technologies. But reality dictates that the tools and techniques used for application monitoring do not always fit SIEM architectures. Unfortunately this means some of the existing technologies you may have – and more importantly the way you've deployed them – may not fit into this new reality. We believe all organizations need to continue broadening how they monitor their IT resources and incorporate technologies designed to look at the application layer, providing detection of application attacks in near real time. But to be clear, adoption is still very early and the tools are largely immature. The following is an overview of the technologies designed to monitor at the application layer, and these are what we will focus on in this series:

  • File Integrity Monitoring: Real-time verification of applications, libraries, and patches on a given platform. It's designed to detect replacement of files and executables, code injection, and the introduction of new and unapproved applications.
  • Identity Monitoring: Designed to identify users and user activity across multiple applications, or when generic group or service accounts are in use. It employs a combination of location, credential, activity, and data comparisons to 'de-anonymize' user identity.
  • Database Monitoring: Designed to detect abnormal operations, statements, or user behavior, covering both end users and database administrators. Monitoring systems review database activity for SQL injection, code injection, escalation of privilege, data theft, account hijacking, and misuse.
  • Application Monitoring: Protects applications, web applications, and web-based clients from man-in-the-middle attacks, cross site scripting (XSS), cross site request forgery (CSRF), SQL


Upcoming Webinar: Selecting SIEM

Tuesday, September 21st, at 11am PST / 2pm EST, I will be presenting a webinar: "Keys to Selecting SIEM and Log Management", hosted by NitroSecurity. I'll cover the basics of SIEM, including data collection and deployment, then dig into use cases, enrichment, data management, forensics, and advanced features. You can sign up for the webinar here. SIEM and Log Management platforms have been around for a while, so I am not going to spend much time on background, instead steering toward current trends and issues. If I gloss over any areas you are especially interested in, we will have 15 minutes for Q&A. You can send questions in ahead of time to info 'at' securosis dot com, and I will try to address them within the slides. Or you can submit a question through the WebEx chat facility during the presentation, and the host will work it into the discussion.


Monitoring up the Stack: Introduction

The question that came up over and over again during our SIEM research project was: "How do I derive more value from my SIEM installation?" As we discussed throughout that report, plenty of data gets collected, but extracting actionable information remains a challenge. In part this is due to the "drinking from the fire hose" effect, where the speed and volume of incoming data make it difficult to process effectively. Additionally, data needs to be pieced together with sufficient reference points from multiple event sources before analysis. But we found a major limiting factor was also the network-centric perspective on data collection and analysis. We were looking at traffic rather than transactions. We were looking at packet density, not services. We were looking at IP addresses instead of user identity. We didn't have the context to draw conclusions. We continue pushing our research agenda forward in the areas of application and user monitoring, as this has practical value for performing more advanced analysis. So we will dig into these topics and trends in our new series "Monitoring up the Stack: Reacting Faster to Emerging Attacks".

Compliance and operations management are important drivers for investment in SIEM, Log Management, and other complementary monitoring technologies. SIEM has the capacity to provide continuous monitoring, but most installations are just not set up to provide timely threat response to application attacks. To support more advanced policies and controls, we need to peel back the veil of network-oriented analysis and look at applications and business transactions. In some cases, this just means a new way of looking at existing data. But that would be too easy, wouldn't it? To monitor up the stack effectively we need to look at changes in architecture, policy management, data collection, and analysis. Business process analytics and fraud detection require different policies, some additional data, and analysis techniques beyond what is commonly found in SIEM. If we want to make sense of business use of IT systems, we need to move up the stack, into the application layer.

What's different about monitoring at the application layer? Application awareness and context. To highlight why network and security event monitoring is inherently limited for some use cases, consider that devices and operating systems sit outside business processes. In some cases they lack the information needed to perform the analysis, but more commonly the policies and analysis engines are just not set up to detect fraud, spoofing, repudiation, and injection attacks. From the application perspective, network identity and user identity are very different things. Analysis performed in the context of the application provides contextual data unavailable from network and device data alone. It also provides an understanding of transactions, which is much more useful and informative than raw events. Finally, the challenges of deploying a solution for real-time analysis of events are almost the opposite of those for efficient management and correlation. Evolving threats target data and application functions, and we need that perspective to understand and keep up with them. Ultimately we want to support business analysis and operations management when parsing event streams, which are the areas SIEM platforms struggle with. And for compliance we want to implement controls and verify both their effectiveness and their appropriateness.
To accomplish this we must employ additional tactics for baselining behavior, advanced forms of data analysis, and policy management, and – perhaps most importantly – develop a better understanding of user identity and authorization. Sure, for security and network forensics, SIEM does a good job of piecing together related events across a network. Both approaches detect attacks, and both help with forensic analysis. But monitoring up the stack is far better for detecting misuse and more subtle forms of data theft. And depending upon how it's deployed in your environment, it can block activity as well as report problems. In our next post we'll dig into the threats that drive monitoring, and how application monitoring is geared toward certain attack vectors.


FireStarter: Automating Secure Software Development

I just got back from the AppSec 2010 OWASP conference in Irvine, California. As you might imagine, it was all about web application security. We security practitioners and coders generally agree that we need to "bake security in" to the development process. Rather than tacking security onto a product like a band-aid after the fact, we actually attempt to deliver code that is secure from the get-go. We are still figuring out how to do this effectively and efficiently, but it seems to me a very good idea.

One of the OWASP keynote presentations was at odds with the basic premise held by most of the participants. The idea presented was (I am paraphrasing) that coders suck at secure code development. Further, they will continue to suck at it, in perpetuity. So let's take security out of the application developers' hands entirely and build it in with compilers and pre-compilers that take care of bad code automatically. That way they can continue to be ignorant, and we'll fix it for them! Oddly, I agree with two of the basic premises: coders for the most part suck today at coding securely, and a couple of common web application exploits can be addressed with this technique. Technology, including real and conceptual implementations, can deal with a wide variety of spoofing and injection attacks. Other than that, I think this idea is completely crazy.

Coders are mostly ignorant of security today, but that's changing. There are some vendors looking to productize secure coding automation tactics, because there are practical applications that are effective. But these are limited to correcting simple coding errors, and they work because machines can easily recognize patterns humans tend to overlook. Thinking that we can automate software security into a product through certifications and format-checking programs is not just science fiction, it's fantasy. I'll give you one guess who I'll bet hasn't written much code in her career. Oh crap, did I give it away?

On the other hand, I have built code that was perfect. Until it was hacked. Yeah, the code was exactly to specification, and performed flawlessly. In fact it performed too flawlessly, and was subject to a timing attack that leaked enough information that the output could be guessed. No compiler in the world would have picked up this subtle issue, but an attacker watching the behavior of an application will spot it quickly. And they did. My bad.

I am all for automating as much security as we can into the development process, especially as a check on developer activities. Nothing wrong with that – we do it today. But to think that we can automate security and remove it from the hands of developers is naive to the point of being surreal. Timing attacks, logic attacks, and architectural flaws do not show up to a compiler or any form of pre/post automated checks. There has been substantial research on how to validate state machine behavior to detect business transaction fraud, but there has never been a practical application: it's more work to establish the rules than to simply have someone manually verify the process. It doesn't work, and it won't work. People are crafty. Ingenious. Devious. They don't play by the rules. Compilers and processors do.

That's certainly my opinion. I'm sure some entrepreneur just slit his/her wrists. Oh, well. Okay, smart guy/gal, tell me why I'm wrong. Especially if you are trying to build a company around this.
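To illustrate the class of flaw described above (this is not the author's actual code, just a generic Python example of a timing side channel, with an invented secret), compare a token check that returns on the first mismatched byte with a constant-time check. Both meet the same functional specification, and neither would trip a compiler or format check; the difference only shows up in timing an attacker can measure.

```python
import hmac

SECRET = b"s3cr3t-token"  # invented secret for the example

def naive_check(guess: bytes) -> bool:
    # Returns as soon as a byte differs, so response time grows with the
    # length of the correct prefix -- the side channel an attacker can
    # measure remotely to recover the secret one byte at a time.
    if len(guess) != len(SECRET):
        return False
    for a, b in zip(guess, SECRET):
        if a != b:
            return False
    return True

def constant_time_check(guess: bytes) -> bool:
    # Compares every byte regardless of mismatches, so timing no longer
    # depends on how close the guess is.
    return hmac.compare_digest(guess, SECRET)

# Functionally identical results for any input -- which is the point:
# the flaw lives in observable behavior, not in the specification.
assert naive_check(b"s3cr3t-token") and constant_time_check(b"s3cr3t-token")
assert not naive_check(b"wrong-token!") and not constant_time_check(b"wrong-token!")
```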


Friday Summary: September 10, 2010

I attended the OWASP Phoenix chapter meeting earlier this week, talking about database encryption. The crowd was small, as the meeting was the Tuesday after Labor Day rather than the normal Thursday slot. Still, I had a good time, especially with the discussion afterwards. We talked about a few things I know very little about. Actually, there are several areas of security that I know very well. There are a few that I know reasonably well, but as I don't practice them day to day I really don't consider myself an expert. And there are several that I don't know at all. I find this odd, as it seemed that 15 years ago a single person could 'know' computer security. If you understood network security, access controls, and crypto, you had a pretty good handle on things. Throw in some protocol design, injection, and pen test concepts and you were a freakin' guru. Even among the handful of people at the OWASP meeting, there were diverse backgrounds in the audience. After the presentation we were talking about books, tools, and approaches to security. We were talking about setting up labs and CTF training sessions. Somewhere during the discussion it dawned on me just how much things have changed; there are a lot of different subdisciplines in computer security. Earlier this week Marcus Carey (@marcusjcarey) tweeted "There is no such thing as a Security Expert", which I have to grudgingly admit is probably true. Looking across the spectrum we have everything from reverse engineering malware to disk drive forensics. It's reached a point where it's impossible to be a 'security' expert; rather you are an application security expert, or a forensic auditor, or a cryptanalyst, or some other form of specialist. We've undergone several evolutionary steps in understanding how to compromise computer systems, and there are a handful of signs we are getting better at addressing bad security. The depth of research and knowledge in the field of computer security has progressed at a staggering rate, which keeps things interesting and means there is always something new to learn.

With Rich in Babyland, the Labor Day holiday, and me travelling this week, you'll have to forgive us for the brevity of this week's summary:

Webcasts, Podcasts, Outside Writing, and Conferences
  • Seven Features To Look For In Database Assessment Tools. Adrian's Dark Reading post.

Favorite Securosis Posts
  • Adrian Lane: Market For Lemons.
  • Mike Rothman: This week's Incite: Iconoclastic Idealism. Yes, voting for myself is lame, but it's a good piece. Will be hanging on my wall as a reminder of my ideals.

Other Securosis Posts
  • New Release: Data Encryption 101 for PCI.
  • Understanding and Selecting an Enterprise Firewall: Technical Architecture, Part 1.
  • Understanding and Selecting an Enterprise Firewall: Application Awareness, Part 2.

Favorite Outside Posts
  • Adrian Lane: Interview Questions. I know it's a week old, but I just saw it, and some of it's really funny.
  • Mike Rothman: Marketing to the Bottom of the Pyramid. We live a cloistered, ridiculously fortunate existence. Godin provides interesting perspective on how other parts of the world buy (or don't buy) innovation.

Project Quant Posts
  • NSO Quant: Take the Survey and Win an iPad.
  • NSO Quant: Manage IDS/IPS Process Revisited.
  • NSO Quant: Manage IDS/IPS – Monitor Issues/Tune.

Research Reports and Presentations
  • Data Encryption 101: A Pragmatic Approach to PCI.
  • White Paper: Understanding and Selecting SIEM/Log Management.
  • White Paper: Endpoint Security Fundamentals.

Top News and Posts
  • IE 8 Bug. Vuln popped up late last Friday.
  • Adobe Patches via Brian Krebs.
  • Apple OS X Security Patch.

Blog Comment of the Week
Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week's best comment goes to ds, in response to FireStarter: Market for Lemons.

I guess this could be read both ways… more insight as would be gained from researchers could help shift the balance of information to the consumer, but it could also confirm the conclusion that a product was low quality. I don't know of any related research that shows that consumer information helps improve consumer outcomes, though that would be interesting to see. Does anyone know if the "security seal" programs actually improve users' perceptions? And do those perceptions materialize in greater adoption? Also may be interesting. I don't think we need something like lemon laws for two reasons: 1) The provable cost of buying a bad product for the consumer is nominal; not likely to get any attention. The costs of the security product failing are too hard to quantify into actual numbers so I am not considering these. 2) Corporations that buy the really expensive security products have far more leverage to conduct pre-purchase evaluations, to put non-performance clauses into their contracts, and to readily evaluate ongoing product suitability. The fact that many don't is a separate issue that won't in any case be fixed by the law.


FireStarter: Market for Lemons

During BlackHat I proctored a session on "Optimizing the Security Researcher and CSO Relationship". From the title and outline most of us assumed this presentation would get us away from the "responsible disclosure" quagmire by focusing on the views of the customer. Most of the audience was IT practitioners, and most were interested in ways research findings might help the end customer, rather than giving them another mess to clean up while exploit code runs rampant – or, just as importantly, in knowing which threats are hype and which are serious. Unfortunately this was not to be. The panel got (once again) mired in the ethical disclosure debate, with vendors and researchers staunchly entrenched in their positions. Irreconcilable differences: we get that. But speaking with a handful of audience members after the presentation, I can say they were a little ticked off. They asked repeatedly: how does this help the customers? To which they got flippant answers to the effect of "we get them boxes/patches as fast as we can".

Our contributing analyst Gunnar Peterson offered a wonderful parallel that describes this situation: The Market for Lemons. It's an analysis of how uncertainty over quality changes a market. In a nutshell, the theory states that a vendor has a distinct advantage because they have knowledge and understanding of their product that the average consumer is incapable of discovering. This asymmetry of available information means consumers cannot judge good from bad, or high risk from low. The seller is incentivized to pass off low-quality items as high quality (with premium pricing), and customers lose faith and consider all goods low quality, harming the market in several ways. Sound familiar?

How does this apply to security? Think about anti-virus products for a moment and tell me this isn't a market for lemons. The AV vendors dance on the tables talking about how they catch all this bad stuff, and thanks to NSS Labs yet another test shows they all suck. Consider product upgrade cycles where customers lag years behind the vendor's latest release or patch for fear of getting a shiny new lemon. Low-function security products, just like low-quality products in general, cause IT to spend more time managing, patching, reworking, and fixing clunkers. So a lot of companies are justifiably a bit gun-shy about upgrading to the latest & greatest version. We know it's in the best interest of the vendors to downplay the severity of the issues and keep their users calm (jailbreak.me, anyone?). But they have significant data that would help the customers with their patching, workarounds, and operational security as these events transpire.

It's about time someone started looking at vulnerability disclosures from the end user perspective. Maybe some enterprising attorney general should stir the pot? Or maybe threatened legislation could get the vendor community off their collective asses? You know the deal – sometimes the threat of legislation is enough to get forward movement. Is it time for security lemon laws? What do you think? Discuss in the comments.


New Release: Data Encryption 101 for PCI

We are happy to announce the availability of Data Encryption 101: A Pragmatic Approach to PCI Compliance. It struck Rich and me that data storage is a central topic for PCI compliance which has not gotten a lot of coverage. The security community spends a lot of time discussing the merits of end-to-end encryption, tokenization, and other topics, but meat-and-potatoes stuff like encryption for data storage is hardly ever mentioned. We feel there is enough ambiguity in the standard to warrant deeper inspection into what merchants are doing to meet the PCI DSS requirements. For those of you who followed along with the blog series, this paper is a compilation of that content, but it has been updated to reflect all the comments we received and additional research, and the entire report was professionally edited.

We especially want to thank our sponsor, Prime Factors, Inc., for stepping up and sponsoring this research! Without them, we couldn't produce free research like this. As with all our papers, the content was developed independently and completely in the open using our Totally Transparent Research process. The white paper is licensed under Creative Commons Attribution-Noncommercial-No Derivative Works 3.0. And in keeping with our ideals on privacy, we don't require registration to download the paper, so you don't need to think up some clever pseudonym, turn off JavaScript, or worry about tracking cookies. Finally, we would like to thank Dan, Jay Jacobs, and Kevin Kenan, as well as those of you who emailed inquiries and feedback; your participation helps us and the community.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.