Securosis

Research

There Are No Trusted Sites: New York Times Edition

Continuing our seemingly endless series on “trusted” sites that are compromised and then used to attack visitors, this week’s parasitic host is the venerable New York Times. It seems the Times was compromised via their advertising system (a common theme in these attacks) and was serving up scareware over the weekend (for more on scareware, and how to clean it up, see Dancho Danchev’s recent article at the Zero Day blog). I recently had to clean up some scareware myself on my in-laws’ computer, but fortunately they didn’t actually pay for anything. Here are some of our previous entries in this series: BusinessWeek, AMEX, Paris Hilton, and McAfee. Don’t worry, there are plenty more out there – these are just a few that struck our fancy.


Google and Micropayment

For a security blog, this is a little off topic. I recommend you stop reading if you consider my fascination with payment processing tiresome. Do any of you remember Project Xanadu? It was a precursor to the World Wide Web, envisioned as a way to share documents and research. As I understand it, the project died from trying to realize too many good ideas at once, collapsing under the weight of its expectations. One of the ideas that came out of this project was the concept of micro-payments. I have spoken with team members from this project during its various phases, and been told that a micro-payment engine was being designed during the mid-90s to accommodate content providers who demanded to be paid for making their research available. I never did review the code released in 1998, so this is pure hearsay, or urban legend, or whatever you want to call it. Still, when word got out that we were working on a micro-payment engine at Transactor in 1997, there were warnings that people would not pay for content. In fact, the lesson seemed to be that much of the success of the web was due to the vast green fields of free information and community participation without cost. A lot has changed, but I still get that nagging feeling when I read about how Google’s proposed Micropayment System is going to help save publishers. Personally, I don’t think it will work. Not for the publishers. Not when the competitors give quality information away for free. Not when most users are reticent to even register, much less pay. But if a micropayment engine provides Google greater access to unique content, especially as it relates to newspapers, they win regardless. It becomes like Gmail in reverse. And on the flip side it extends the reach of their technology, establishing a financial relationship with everyday web users. Even if they don’t make a dime from sales commissions, it’s a brilliant idea because it promotes their existing business model.
I told them as much in 2005, when I went through the second most bizarre interview process of my career. They have been playing footsie with this product idea for a long time, and I have not figured out why they have been so slow to get a ‘beta’ product out there. There is room for competition and innovation in payment processing, but I remain convinced that micropayment has limited use cases, and news feeds are not among them.


Friday Summary – September 11, 2009

We announced the launch of the Contributing Analyst and Intern program earlier this week, with David Mortman and David Meier filling these respective roles. I think the very first Securosis blog comment I read was from Windexh8r (Meier), and Chris Hoff introduced me to David Mortman a couple years ago at RSA, so I am fortunate to already be familiar with both our new team members. We are lucky to have people with such solid backgrounds wanting to join our open source research firm. Rich and I put up a blog post a few weeks ago and said, “Hey, want to learn how to be an analyst?” Far more people signed up than we expected, and the quality and depth of security experience of our applicants shocked us. That, and why they want to be analysts. I never considered being an analyst at any point in my career prior to joining Securosis. There were periods where I was not quite sure which path I would take in my line of work, so I experimented with several roles during my career (CTO, CIO, VP, Architect). It was a classic case of “the grass is always greener”: I was always looking for a different challenge, and never quite satisfied. But here it is, some 15 months after joining Rich, and I am enjoying the role of analyst. To tell you the truth, I am not really sure what the role is exactly, but I am having fun. This is not exactly a traditional analysis and research firm, so if you asked me “What does an analyst do?”, my answer would be very different from what you’d get from an analyst at one of the big firms. A couple weeks ago when Rich and I decided to start the contributing analyst and intern positions, we understood we would have to train others to do what we do. Rich and I more or less share a vision for what we want to do, so there’s not a lot of discussion. Now we have to articulate and exemplify what we do for others. It dawned on me that I have been learning from Rich by watching.
I had the research side down cold before I joined, but being on the receiving end of the briefings provides a stark contrast between vendor and analyst. I have been part of a few hundred press & analyst meetings over the years, and I understood my role as CTO was to describe what was new, why it mattered, and how it made customers happy. I never considered what it took to be on the other side of the table. To be harsh about it, I assumed most of the press and analysts were neither technical nor fully versed in customer issues because they had never been in the trenches, and really lacked the perspective needed to help either vendors or customers in a meaningful way. They could sniff out newsworthy items, but not explain why they mattered to buyers. Working with Rich dispelled this myth. The depth and breadth of information we have access to is staggering. Plus, as an analyst Rich possesses both the technical proficiency and the same drive (passion) to learn that good software developers and security researchers share. Grasping the technology, product, and market, then communicating how the three relate, is a big part of what we do. And perhaps most importantly, he has the stomach to tell people the truth that their baby is ugly. Anyway, this phase of Securosis development is going to be good for me, and I will probably end up learning as much as or more than our new team members. I look forward to the new dimension David and David will bring. And with that, here is the week in review:

Webcasts, Podcasts, Outside Writing, and Conferences
Rich was quoted in SC Magazine on Trustwave’s acquisition of DLP vendor Vericept.
Rich spoke last week at the Phoenix OWASP chapter.

Favorite Securosis Posts
Rich: My first rough cut post on data security in the cloud. I had another halfway finished, before our blog software ate it. I got bit in the aaS by our SaaS.
Adrian: I have been wanting to talk about Format and Datatype Preserving Encryption for the last three months and finally got the chance to finish the research.

Other Securosis Posts
Say Hello to the New (Old) Guys
Data Protection Decisions Seminar in DC next week!
Critical MS Vulnerabilities – September 2009
Cloud Data Security Cycle: Create (Rough Cut)

Project Quant Posts
Project Quant Survey Results and Analysis
Raw Project Quant Survey Results

Favorite Outside Posts
Adrian: Bruce Schneier’s post on File Deletion highlights the issues around data retention in Cloud/SaaS environments.
Rich: Amrit Williams and Peter Kyper on the state of the security industry.

Top News and Posts
Critical Microsoft Vulnerabilities grab the headlines this week. Ryan Naraine’s update on one of the vulnerabilities.
Some defenses for the TCP DoS vulnerabilities posted at Dark Reading.
Ignoring the article’s hype angle, cross-VM hacking is interesting research, even if unrealistic.
Government to accept Yahoo, Google and Paypal credentials.
Holy hackers, Batman, it’s full of holes. You know, holey.
Nice post on Ars Technica on anonymization and data obfuscation.
Trustwave acquires Vericept.
iPhone 3.1 anti-phishing seems to be working (or not) oddly.
Firefox will now check your Flash version, which is pretty darn awesome and should be in every browser.
Court allows woman to sue bank after her account is leeched. Expect to see more of this, since this sort of crime is dramatically increasing.
Ever travel? Check out everything the TSA stores about you.

Blog Comment of the Week
This week’s best comment comes from pktsniffer in response to Format and Datatype Preserving Encryption: Your right on the money. We had Voltage in recently to give us their encryption pitch. It was the ease of deployment using FFSEM that they were ‘selling’. I too have concerns regarding the integrity of the encryption but from an ease


Data Protection Decisions Seminar in DC next week!

Rich and I are going to be at TechTarget’s Washington DC Data Protection Decisions Seminar on September 15th. We will be presenting on the following subjects:

Pragmatic Data Security
Database Activity Monitoring
Understanding and Selecting a DLP Solution
Data Encryption

It is being held at the Sheraton National in Arlington. If you are interested in attending there is more information on the TechTarget site. Heck, I even think you earn CPE credits for listening. While it’s going to be a brief stay for both of us, let us know if you’re in town so we can catch up.


Say Hello to the New (Old) Guys

A little over a month ago we decided to try opening up an intern and Contributing Analyst program. Somewhat to our surprise, we ended up with a bunch of competitive submissions, and we’ve been spending the past few weeks performing interviews and running candidates through the wringer. We got all mean and even made them present some research on a nebulous topic, just to see what they’d come up with. It was a really tough decision, but we decided to go with one intern and one Contributing Analyst. David Meier, better known to most of you as Windexh8r, starts today as the very first Securosis intern. Dave was a very early commenter on the blog, has an excellent IT background, and helped us create the ipfw firewall rule set that’s been somewhat popular. He blogs over at Security Stallions, and we’re pretty darn excited he decided to join us. He’s definitely a no-BS kind of guy who loves poking holes in things and looking for unique angles of analysis. We’re going to start hazing him as soon as he sends the last paperwork over (with that liability waiver). We’re hoping he’s not really as good as we think, or we’ll have to promote him and find another intern to beat. David Mortman, the CSO-in-Residence of Echelon One, and a past contributor to this blog, is joining us as our first Contributing Analyst. David’s been a friend for years now, and we even split a room at DefCon. Since I owed David a serious favor after he covered the blog for me while I was out last year for my shoulder surgery, he was sort of a shoo-in for the position. He has an impressive track record in the industry, and we are extremely lucky to have him. You might also know David as the man behind the DefCon Security Jam, and he’s a heck of a bread baker (and cooker of other things, but I’ve only ever tried his bread). Dave and David (yeah, we know) can be reached at dmeier@securosis.com and dmortman@securosis.com (and all their other email/Twitter/etc. addresses).
You’ll start seeing them blogging and participating in research over the next few weeks. We’ve gone ahead and updated their bios on our About page, and listed any conflicts of interest there. (Interns and Contributing Analysts are included under our existing NDAs and confidentiality agreements, but will be restricted from activities, materials, and coverage of areas where they have conflicts of interest).


Format and Datatype Preserving Encryption

That ‘pop’ you heard was my head exploding after trying to come to terms with this proof of why Format Preserving Encryption (FPE) variants are no less secure than AES. I admitted defeat many years ago as a cryptanalyst because, quite frankly, my math skills are nowhere near good enough. I must rely on the experts in this field to validate this claim. Still, I am interested in FPE because it was touted as a way to save all sorts of time and money with database encryption since, unlike other ciphers, if you encrypt a small number you get a small number (or hex value) back. This means that you do not need to alter the database to handle some big honkin’ string of ciphertext. While I am not able to tell you whether this type of technology really provides ‘strong’ cryptography, I can tell you about some of the use cases, how you might derive value, and things to consider if you investigate the technology. And as I am getting close to finalizing the database encryption paper, I wanted to post this information before closing that document for review. FPE is also called Datatype Preserving Encryption (DPE) and Feistel Finite Set Encryption Mode (FFSEM), amongst other names. Technically there are many labels to describe subtle variations in the methods employed, but in general these encryption variants attempt to retain the same size, and in some cases data type, as the original data being encrypted. For example, encrypt ‘408-555-1212’ and you might get back ‘192807373261’ or ‘a+3BEJbeKL7C’. The motivation is to provide encrypted data without the need to change all of the systems that use that data, such as the database structure, queries, and application logic. The business justification for this type of encryption is a little foggy.
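To make the digit-preserving behavior concrete, here is a toy Feistel construction over decimal strings: the ciphertext always has the same length and character set as the plaintext. This is purely illustrative and assumes nothing about any vendor's implementation – real designs of this family (FFSEM, and later NIST's FF1) require far more cryptographic care.

```python
import hmac, hashlib

def _round_value(key: bytes, half: str, rnd: int, width: int) -> int:
    # Pseudo-random round function: HMAC the other half, reduce mod 10^width
    digest = hmac.new(key, f"{rnd}:{half}".encode(), hashlib.sha256).digest()
    return int.from_bytes(digest[:8], "big") % (10 ** width)

def fpe_encrypt(key: bytes, digits: str, rounds: int = 8) -> str:
    # Expects a string of two or more decimal digits
    mid = len(digits) // 2
    left, right = digits[:mid], digits[mid:]
    for rnd in range(rounds):
        if rnd % 2 == 0:  # even rounds rework the left half
            left = f"{(int(left) + _round_value(key, right, rnd, len(left))) % 10 ** len(left):0{len(left)}d}"
        else:             # odd rounds rework the right half
            right = f"{(int(right) + _round_value(key, left, rnd, len(right))) % 10 ** len(right):0{len(right)}d}"
    return left + right

def fpe_decrypt(key: bytes, digits: str, rounds: int = 8) -> str:
    # Undo the rounds in reverse order by subtracting instead of adding
    mid = len(digits) // 2
    left, right = digits[:mid], digits[mid:]
    for rnd in reversed(range(rounds)):
        if rnd % 2 == 0:
            left = f"{(int(left) - _round_value(key, right, rnd, len(left))) % 10 ** len(left):0{len(left)}d}"
        else:
            right = f"{(int(right) - _round_value(key, left, rnd, len(right))) % 10 ** len(right):0{len(right)}d}"
    return left + right
```

Encrypting the ten digits of ‘408-555-1212’ with this sketch yields another ten-digit string, which is exactly the property that lets the ciphertext drop into an unmodified database column.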
The commonly cited reasons to consider FPE/DTP are: a) changing the database or other storage structures is impossibly complex or cost prohibitive, or b) changing the applications that process sensitive data would be impossibly complex or cost prohibitive. In either case you need a way to protect the data without requiring these changes. The cost you are looking to avoid is changing your database and application code, but on closer inspection this savings may be illusory. For most firms, changing the database structure is a simple ALTER TABLE command, along with changes to a few dozen queries and some data cleanup, and you are done – not so dire. And regardless of what form of encryption you choose, you will need to alter application code somewhere. The question becomes whether an FPE solution will allow you to minimize application changes as well. If the database changes are minimal and FPE requires the same application changes as non-FPE encryption, there is not a strong financial incentive to adopt. You also need to consider tokenization, wherein you remove the sensitive data completely – for example by replacing credit card numbers with tokens, each of which represents a single CC#. Since a token can be of an arbitrary size and value to fit the data types you already use, it has most of the same benefits as FPE in terms of data storage. Most companies would rather get rid of the data entirely if they can, which is why many firms we speak with are seriously investigating, or already plan to adopt, tokenization. It costs about the same, and there is less risk if credit cards are removed entirely. Two vendors currently offer products in this area: Voltage and Protegrity (there may be more, but I am only aware of these two). Each offers several different variations, but for the business use cases we are discussing they are essentially equivalent.
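The tokenization alternative described above is conceptually simple: swap the real card number for a random, same-format token and keep the mapping in a guarded vault. A toy sketch (all names hypothetical, nothing vendor-specific; a real vault needs durable storage, access control, and auditing):

```python
import secrets

class TokenVault:
    """Toy token vault: replaces card numbers with random same-format tokens."""

    def __init__(self):
        self._by_pan = {}    # card number -> token
        self._by_token = {}  # token -> card number

    def tokenize(self, pan: str) -> str:
        if pan in self._by_pan:  # same card number always maps to the same token
            return self._by_pan[pan]
        while True:
            # Keep the last four digits (useful for receipts), randomize the rest
            token = "".join(secrets.choice("0123456789")
                            for _ in range(len(pan) - 4)) + pan[-4:]
            if token not in self._by_token and token != pan:
                break
        self._by_pan[pan] = token
        self._by_token[token] = pan
        return token

    def detokenize(self, token: str) -> str:
        # Only systems with access to the vault can recover the real number
        return self._by_token[token]
```

Because the token has the same length and character set as the original number, downstream systems store and pass it unchanged – the same storage benefit FPE offers, but with no sensitive data left outside the vault.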
In the use case above, I stressed data storage as the most frequently cited reason to use this technology. Now I want to talk about another real life use case, focused on moving data, that is a little more interesting and appropriate. You may remember a few months ago when Heartland and Voltage produced a joint press release regarding deployment of Voltage products for end to end encryption. What I understand is that the Voltage technology being deployed is an FPE variant, not one of the standard implementations of AES. Sathvik Krishnamurthy, president and chief executive officer of Voltage, said “With Heartland E3, merchants will be able to significantly reduce their PCI audit scope and compliance costs, and because data is not flowing in the clear, they will be able to dramatically reduce their risks of data breaches.” The reason I think this is interesting, and why I was reviewing the proof above, is that this method of encryption is not on PCI’s list of approved ‘strong’ cryptography ciphers. I understand that NIST is considering the suitability of the AES variant FFSEM (pdf) as well as DTP (pdf) encryption, but they are not approved at this time. And Voltage submitted FFSEM, not FPE. Not only was I a little upset at letting myself be fooled into thinking that Heartland’s breach was accomplished through the same method as Hannaford’s – which we now know is false – but also at taking the above quote at face value. I do not believe that the network outside of Heartland comes under the purview of the PCI audit, nor would the FPE technology be approved if it did. It’s hard to imagine this would greatly reduce their PCI audit costs unless their existing systems left the data open to most internal applications and needed a radical overhaul. That said, the model Voltage is prescribing appears to be ideally suited for this technology: moving sensitive data securely across multi-system environments without changing every node.
For data encryption to address end to end issues in Hannaford and similar types of breach responses, FPE would allow for all of the existing nodes to continue to function along the chain, passing encrypted data from POS to payment processor. It does not require


Cloud Data Security Cycle: Create (Rough Cut)

Last week I started talking about data security in the cloud, and I referred back to our Data Security Lifecycle from back in 2007. Over the next couple of weeks I’m going to walk through the cycle and adapt the controls for cloud computing. After that, I will dig in deep on implementation options for each of the potential controls. I’m hoping this will give you a combination of practical advice you can implement today, along with a taste of potential options that may develop down the road. We do face a bit of a chicken-and-egg problem with this series, since some of the technical details of controls implementation won’t make sense without the cycle, but the cycle won’t make sense without the details of the controls. I decided to start with the cycle, and will pepper in specific examples where I can to help it make sense. Hopefully it will all come together at the end. In this post we’re going to cover the Create phase:

Definition
Create is defined as generation of new digital content, either structured or unstructured, or significant modification of existing content. In this phase we classify the information and determine appropriate rights. This phase consists of two steps – Classify and Assign Rights.

Steps and Controls

Control        Structured/Application             Unstructured
Classify       Application Logic; Tag/Labeling    Tag/Labeling
Assign Rights  Label Security                     Enterprise DRM

Classify
Classification at the time of creation is currently either a manual process (most unstructured data), or handled through application logic. Although the potential exists for automated tools to assist with classification, most cloud and non-cloud environments today classify manually for unstructured or directly-entered database data, while application data is automatically classified by business logic. Bear in mind that these are controls applied at the time of creation; additional controls such as access control and encryption are managed in the Store phase.
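As a toy illustration of classification at creation time via application logic, the sketch below attaches labels to a newly created record based on field content. The rule names and field names are entirely hypothetical; real business logic would also consider where the data was entered, not just patterns.

```python
import re

# Hypothetical classification rules: label a value by simple content patterns
RULES = [
    ("pci", re.compile(r"^\d{13,16}$")),          # card-number-like digits
    ("pii", re.compile(r"^\d{3}-\d{2}-\d{4}$")),  # SSN-like pattern
]

def classify(record: dict) -> dict:
    """Attach a '_labels' field at creation time, for later controls to use."""
    labels = set()
    for value in record.values():
        for label, pattern in RULES:
            if isinstance(value, str) and pattern.match(value):
                labels.add(label)
    record["_labels"] = sorted(labels) or ["public"]
    return record
```

The point is that the label travels with the record into storage, where access controls or encryption (covered in the Store phase) can key off it.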
There are two potential controls:

Application Logic: Data is classified based on business logic in the application. For example, credit card numbers are classified as such based on field definitions and program logic. Generally this logic is based on where data is entered, or via automated analysis (keyword or content analysis).

Tagging/Labeling: The user manually applies tags or labels at the time of creation, e.g., manually tagging via drop-down lists or open fields, manual keyword entry, suggestion-assisted tagging, and so on.

Assign Rights
This is the process of converting the classification into rights applied to the data. Not all data necessarily has rights applied, in which case security is provided through additional controls during later phases of the cycle. (Technically rights are always applied, but in many cases they are so broad as to be effectively non-existent.) These are rights that follow the data, as opposed to access controls or encryption which, although they protect the data, are decoupled from its creation. There are two potential technical controls here:

Label Security: A feature of some database management systems and applications that adds a label to a data element – such as a database row, column, or table, or file metadata – classifying the content in that object. The DBMS or application can then implement access and logical controls based on the data label. Labels may be applied at the application layer, but only count as assigning rights if they also follow the data into storage.

Enterprise Digital Rights Management (EDRM): Content is encrypted, and access and use rights are controlled by metadata embedded with the content. The EDRM market has been somewhat self-limiting due to the complexity of enterprise integration and of assigning and managing rights.

Cloud SPI Tier Implications

Software as a Service (SaaS)
Classification and rights assignment are completely controlled by the application logic implemented by your SaaS provider.
Typically we see Application Logic, since that’s a fundamental feature of any application – SaaS or otherwise. When evaluating your SaaS provider you should ask how they classify sensitive information and then later apply security controls, or if all data is lumped together into a single monolithic database (or flat files) without additional labels or security controls to prevent leakage to administrators, attackers, or other SaaS customers. In some cases, various labeling technologies may be available. You will, again, need to work with your potential SaaS provider to determine if these labels are used only for searching/sorting data, or if they also assist in the application of security controls.

Platform as a Service (PaaS)
Implementation in a PaaS environment depends completely on the available APIs and development environment. As with internal applications, you will maintain responsibility for how classification and rights assignment are managed. When designing your PaaS-based application, identify potential labeling/classification APIs you can integrate into program logic. You will need to work with your PaaS provider to understand how they can implement security controls at both the application and storage layers – for example, it’s important to know if and how data is labeled in storage, and if this can be used to restrict access or usage (business logic).

Infrastructure as a Service (IaaS)
Classification and rights assignments depend completely on what is available from your IaaS provider. Here are some specific examples:

Cloud-based database: Work with your provider to determine if data labels are available, and with what granularity. If they aren’t provided, you can still implement them as a manual addition (e.g., a row field or segregated tables), but understand that the DBMS will not be enforcing the rights automatically, and you will need to program management into your application.

Cloud-based storage: Determine what metadata is available.
Many cloud storage providers don’t modify files, so anything you define in an internal storage environment should work in the cloud. The limitation is that the cloud provider won’t be able to tie access or other security controls to the label, which is sometimes an option with document management systems. Enterprise DRM, for example, should work fine with any cloud storage provider. This should give you a good idea of how to manage classification and


Critical MS Vulnerabilities – September 2009

Got an IM from Rich today: “nasty windows flaw out there – worst in a long time”. I looked over the Microsoft September Security Bulletin and what was posted this morning on their Security Research and Defense blog, and it was clear he was right. MS09-045 and MS09-046 are both “drive-by style” vulnerabilities. The attack vector is most likely malicious websites hosting specially-crafted JavaScript (MS09-045) or malicious use of the DHTML ActiveX control (MS09-046) to infect browsing users. Vulnerabilities that confuse the script engine can be tough to reverse-engineer from the update, so it may take a while for attackers to discover and weaponize them. We still might see a reliable exploit within 30 days, hence the “1” … The attack vector for both CVEs addressed by MS09-047 is most likely again a malicious website, but these vulnerabilities could also be exploited via media files attached to email. When a victim double-clicks the attachment and clicks “Open” on the dialog box, the media file could hit the vulnerable code. I started writing up an analysis of the remotely exploitable threats, which can completely hose your system, when it dawned on me that technical analysis in this case is irrelevant. I hate to get all “Uh, remote code execution is bad, mmmkay”, as that is unhelpful, but I think in this case simplicity is best. Patch your Vista and Windows machines now! If you need someone else to tell you “Yeah, you’re screwed, patch now”, there is a nice post on the MSRC blog you can check out. If there is not an exploit in the wild already – and I am not as optimistic as the MS staff – we will probably see something by week’s end.


Friday Summary – September 4, 2009

As much as I love what I do, it’s turned me into a cynical bastard. And no, I don’t mean skeptical, which we’ve talked about before (the application of critical thinking to determine truth), but truly cynical (everyone is a right bastard who will fleece you for everything you’re worth if given the opportunity). While I think both skepticism and cynicism are important traits for a security professional, they do have their downside… especially cynicism. Marketing, for example, really pisses cynics off – even the regular ole’ marketing that finds its way onto every available surface capable of supporting a sticker, poster, or other form of advertising. Even enjoying movies and such is a bit harder (Star Trek nearly lost me completely with that Nokia bit). Don’t even get me started on blatant manipulation of emotions come Emmy/Oscar time. But credulity is a core aspect of the human experience. You can’t maintain social relationships without a degree of trust, and you can’t enjoy any form of entertainment without the ability to suspend disbelief. That’s why I’m a complete nut-job of a Parrothead. Although I know that behind all Margaritaville blenders there’s some guy making absolutely silly money, I don’t care. I’ve put my stake in the ground and decided that here and now I will suspend my cynicism and completely buy into some fantasy world propagated by a corporate entity. And I love every minute of it. I’ve been a Parrothead since high school, and it’s frightening how influential Jimmy Buffett ended up being on my life. His music got me through paramedic school, and has always helped me escape when life veered to the stressful. Six years ago I met my wife at a Jimmy Buffett concert, our first date was at a show, and we got engaged on a trip to Hawaii for a show. Yes, I’ve blown massive amounts of cash on CDs, DVDs, decorative glassware, and various home decor items featuring palm trees and salt shakers, but I figure Mr. 
Buffett has earned every cent of it with the enjoyment he’s brought into my life. That’s why, although I’ve met plenty of celebrities over the years (mostly work related), I nearly peed myself when I was grabbed from the backstage pre-show last weekend and told it was time to meet Jimmy. A few years ago a friend of mine was the network admin for the South Pole, and he sent a video to margaritaville.com of some of the Antarctic parrotheads while Jimmy was on his Party at the End of the World tour. They played it all over the country, and when Erik decided to go to the show with us he casually emailed his contact there. Next thing you know we have 10th row seats, backstage passes, and Jimmy wants to meet Erik. Since I took him to his first Buffett show, he grabbed me when they told him he could bring a friend. We spent a few minutes in Jimmy’s dressing room, and I mostly listened as they talked Antarctica. It was an amazing experience, and reminded me why sometimes it’s okay to suspend the cynicism and just enjoy the ride. I won’t ruin the moment by trying to tie this to some sort of analogy or life lesson. The truth is I met Jimmy Buffett, it was totally freaking awesome, and nothing else matters. Don’t forget that you can subscribe to the Friday Summary via email. And now for the week in review:

Webcasts, Podcasts, Outside Writing, and Conferences
Adrian wrote Truth, lies and fiction about encryption for Information Security Magazine (he did the hard work, I only helped with some of the edits).
Rich was quoted on Mac security in the New York Times Gadgetwise Blog.
Rich and Martin on The Network Security Podcast.

Favorite Securosis Posts
Rich: My start on Data Security in the Cloud. I think I’ve finally figured out a framework for this, and will be blogging the heck out of it over the coming weeks.
Adrian: Part 6 of Understanding and Choosing a Database Assessment Solution.

Other Securosis Posts
Sentrigo and MS SQL Server Vulnerability
Musings on Data Security in the Cloud
OWASP and SunSec Announcement

Project Quant Posts
Raw Project Quant Survey Results

Favorite Outside Posts
Adrian: Robert Graham has an interesting article on using DMCA counter-claims.
Rich: Jack Daniel on the evisceration of the Massachusetts security/privacy law.

Top News and Posts
Microsoft IIS FTP flaw
Smart grid hacking
Major Twitter flaw
Security fundamentals apply to virtualization
Faster WiFi cracking (only affects WPA, not WPA2)
Panera gift card (in)security

Blog Comment of the Week
This week’s best comment comes from ds in response to Musings on Data Security in the Cloud: Good post, I couldn’t agree more. I think a lot of the fear of cloud security is that, for many security pros, this paradigm shift changes the way that they work, makes existing skill sets less relevant, and demands they learn new ones. They raise issues of trust and quality much as other IT pros have when faced with other types of sourcing options, but miss the fact that it is our job to determine the trustworthiness of any solution, internal or external, and that an internal solution isn’t inherently trusted just because we go to lunch with the people who implement and manage it.


Understanding and Choosing a Database Assessment Solution, Part 6: Administration

Reporting for compliance and security, job scheduling, and integration with other business systems are the topics this post will focus on. These are the features outside the core scanning function that make managing a database vulnerability assessment product easier. Most database assessment vendors have listed these features for years, but they were implemented in a marketing “check the box” way – not really providing ease of use, and not particularly intended to help customers. Actually, that comment applies to the products in general. In the 2003-2005 time frame, database assessment products pretty much sucked. There really is no other way to capture the essence of the situation. They had basic checks for vulnerabilities, but most lacked security best practices and operational policies, and were insecure in their own right. Reliability, separation of duties, customization, result set management, trend analysis, workflow, integration with reporting or trouble-ticketing – for any of these, you typically had to look elsewhere. Application Security’s product was the best of a bad lot, which included crappy offerings from IPLocks, NGS, ISS, nTier, and a couple others. I was asked the other day, “Why are you writing about database assessment? Why now? Don’t most people know what assessment is?” There are a lot of reasons for this. Unlike DAM or DLP, we’re not defining and demystifying a market. Database security and compliance requirements have been at issue for many years now, but only recently have platforms matured sufficiently to realize their promise. These are not funky little homegrown tools any longer, but are maturing into enterprise-ready products. There are new vendors in the space, and (given some of the vendor calls we get) several more will join the mix.
They are bringing considerable resources to the table beyond what the startups of 5 years ago could muster, integrating the assessment feature into a broader security portfolio of preventative and detective controls. Even the database vendors are starting to take notice and invest in their products. If you reviewed database assessment products more than two years ago and were dissatisfied, it’s time for another look. On to some of the management features that warrant closer review:

Reporting

As with nearly any security tool, you’ll want flexible reporting options, but pay particular attention to the compliance and auditing reports. What is suitable for a security staffer or administrator may be entirely unsuitable for a different internal audience, both in content and level of detail. Further, some products can generate multiple reports from a single set of scan results, while others tie scan results to a single report. Reports should fall into at least three broad categories: compliance and non-technical reports, security reports (incidents), and general technical reports. Built-in report templates can save valuable time, not only grouping related policies together but also providing the level of granularity you want. Some vendors have worked with auditors from the major firms to design reports for specific regulations, like SOX & PCI, and to automatically generate reports during an audit. If your organization needs more flexibility in report creation than the assessment product provides, plan on exporting the data to a third-party tool. Take some time to analyze built-in reports, report templates, and report customization capabilities.

Alerts

Some vendors offer single-policy alerts for issues deemed critical. These issues can be highlighted and escalated independently of other reporting tools, providing flexibility in how to handle high-priority issues.
Assessment products are considered a preventative security measure, and unlike monitoring, alerting is not a typical use case. Policies are grouped by job function, and rather than providing single-policy scanning or internal escalation, critical policy failures are typically addressed through trouble-ticketing systems as part of normal maintenance. If your organization is moving to a “patch and shield” model, prioritized policy alerts are a long-term feature to consider.

Scheduling

You will want to schedule policies to run on a periodic basis, and all of the platforms provide schedulers to launch scans. Job control may be provided internally, handled via external software, or even run as “cron jobs”. Most customers we speak with run security scans on a weekly basis, but compliance scan frequency varies widely, depending upon the type and category of the policy. For example, change management / work order reconciliation is a weekly cycle at some companies and a quarterly job at others. Vendors should be able to schedule scans to match your cycles.

Remediation & Integration

Once policy violations are identified, you need to get the information into the right hands so corrective action can be taken. Since incident handlers may come from either a database or a security background, look for a tool that appeals to both audiences and supplies each with the information they need to understand incidents and investigate appropriately. This can be done through reports or workflow systems, such as Remedy from BMC. As we discussed in the policy section, each policy should have a thorough description, remediation instructions, and references to additional information. Addressing all of these audiences may require policy and report customization by your team. Some vendors provide hooks for escalation procedures and delivery to different audiences. Others use relational databases to store scan results, which can be directly integrated into third-party systems.
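For scanners that defer job control to external software, the “cron jobs” mentioned under Scheduling might look something like the sketch below. The dbscan command, its flags, and the paths are hypothetical, purely for illustration; no specific product CLI is implied.

```shell
# Hypothetical crontab entries driving a command-line assessment scanner.
# "dbscan" and its options are illustrative, not a real product.

# Weekly security scan: Sundays at 02:00
0 2 * * 0  dbscan --policy-group security --target prod-db --out /var/reports/security

# Quarterly change-management reconciliation: 03:00 on the 1st of Jan/Apr/Jul/Oct
0 3 1 1,4,7,10 *  dbscan --policy-group change-mgmt --target prod-db --out /var/reports/compliance
```

The point is simply that the five-field cron schedule maps cleanly onto the weekly security and quarterly compliance cycles described above, so a product without an internal scheduler is not necessarily a deal-breaker.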
Result Set Management

All the assessment products store scan results, but differ on where and how. Some store the raw data retrieved from the database, some store the result of comparing raw data against the policy, and still others store the results within a report structure. For trend analysis, and pursuant to certain regulatory requirements, you might need to retain scan results for a year or more. Depending upon how these results are stored, the results and the reports may change over time! Examine how the product stores and retrieves prior scan results and reports, as it may keep the raw result data, the reports, or both. Regenerated reports might be different if the policies they were mapped to have since changed.
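To illustrate why the storage choice matters, here is a minimal Python sketch, with purely illustrative field names, of archiving raw scan data alongside a snapshot of the policy in force at scan time. Re-evaluating against the snapshot, rather than the current policy, keeps regenerated historical reports from drifting:

```python
import copy
import time

def store_scan(archive, raw_rows, policy):
    """Archive raw results together with a snapshot of the policy used."""
    archive.append({
        "timestamp": time.time(),
        "policy_snapshot": copy.deepcopy(policy),  # pin the policy in time
        "raw": raw_rows,
    })

def regenerate_report(entry):
    """Re-evaluate archived raw data against the archived policy,
    not the current one, so the historical report stays stable."""
    threshold = entry["policy_snapshot"]["max_failed_logins"]
    return [row for row in entry["raw"] if row["failed_logins"] > threshold]

archive = []
policy = {"max_failed_logins": 5}
store_scan(archive, [{"user": "app1", "failed_logins": 9},
                     {"user": "app2", "failed_logins": 4}], policy)

policy["max_failed_logins"] = 3  # the policy is later tightened
# The regenerated report still reflects the policy in force at scan time,
# flagging only app1; had the current policy been used, app2 would now fail too.
print(regenerate_report(archive[0]))
```

A product that stores only finished reports cannot re-slice old data at all, and one that stores only raw data may silently re-judge it against today's policies; storing both, as sketched here, avoids each problem.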


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be unbiased and valuable to the user community, supplementing industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.