Understanding and Selecting DAM 2.0: Market Drivers and Use Cases

I was going to begin this series by talking about some of the architectural changes, but I’ve reconsidered. Since our initial coverage of Database Activity Monitoring technology in 2007, the products have fully matured into enterprise-worthy platforms. What’s more, they’ve proven significant security and compliance benefits, as evidenced by market growth from $40M to revenues well north of $100M per year. This market is no longer dominated by small vendors; large vendors have acquired six of the DAM startups, and DAM is being integrated with other security products into blended platforms. Because of this, I thought it best to go back and define what DAM is, and discuss market evolution first, as it better frames the remaining topics we’ll discuss in the rest of this series.

Defining DAM

Our longstanding definition is: Database Activity Monitors capture and record, at a minimum, all Structured Query Language (SQL) activity in real time or near real time, including database administrator activity, across multiple database platforms, and can generate alerts on policy violations.

While a number of tools can monitor various levels of database activity, Database Activity Monitors are distinguished by five features:

  • The ability to independently monitor and audit all database activity, including administrator activity and SELECT transactions. Tools can record all SQL transactions: DML, DDL, DCL, and sometimes TCL activity.
  • The ability to store this activity securely outside the database.
  • The ability to aggregate and correlate activity from multiple, heterogeneous Database Management Systems (DBMS). Tools can work with multiple DBMS (e.g., Oracle, Microsoft, IBM) and normalize transactions from different DBMS despite differences in their flavors of SQL.
  • The ability to enforce separation of duties on database administrators. Auditing must include monitoring of DBA activity, and solutions should prevent DBA manipulation of, or tampering with, logs and activity records.
  • The ability to generate alerts on policy violations. Tools don’t just record activity; they provide real-time monitoring, analysis, and rule-based alerting. For example, you might create a rule that generates an alert every time a DBA performs a SELECT query on a credit card column that returns more than 5 results. A simple sketch of such a rule appears below.
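To make that example concrete, here is a minimal sketch, in Python, of how such a rule might be evaluated against captured query events. The event fields, column names, and threshold are all hypothetical; real DAM products express policies in their own rule languages and event schemas.

```python
# Hypothetical sketch of a DAM alerting rule: flag any DBA SELECT against
# a credit card column that returns more than 5 rows.

from dataclasses import dataclass

@dataclass
class QueryEvent:
    user: str            # database account that issued the query
    user_is_dba: bool    # whether the account holds DBA privileges
    statement: str       # normalized SQL text captured by the monitor
    columns: list        # columns referenced by the query
    rows_returned: int   # result set size observed by the collector

SENSITIVE_COLUMNS = {"credit_card_number"}  # assumed sensitive column name
MAX_ROWS = 5

def violates_policy(event: QueryEvent) -> bool:
    """Return True if the event matches the example alerting rule."""
    return (
        event.user_is_dba
        and event.statement.lstrip().upper().startswith("SELECT")
        and bool(SENSITIVE_COLUMNS.intersection(event.columns))
        and event.rows_returned > MAX_ROWS
    )

# This event would trigger an alert.
event = QueryEvent("dba_jane", True,
                   "SELECT credit_card_number FROM payments",
                   ["credit_card_number"], 12)
if violates_policy(event):
    print(f"ALERT: {event.user} read {event.rows_returned} card numbers")
```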
DAM tools are no longer limited to a single data collection method; they offer network, OS-layer, memory-scanning, and native audit-layer support. Users can tailor their deployment to their security and performance requirements, and collect data from the sources that best fit those requirements.

Platforms

Reading that, you’ll notice few differences from what we discussed in 2007. Further, we predicted the evolution of Application and Database Monitoring and Protection (ADMP) on the road to Content Monitoring and Protection, stating “DAM will combine with application firewalls as the center of the applications and database security stack, providing activity monitoring and enforcement within databases and applications.” Where it gets interesting is the other – different – routes vendors are taking to achieve this unified model. It’s how vendors bundle DAM into a solution that distinguishes one platform from another.

  • The Enterprise Data Management Model – DAM features are generically extended to many back-office applications. Data operations, such as a file read or an SAP transaction, are treated just like a database query. As before, operations are analyzed to see whether a rule was violated, and if so, a security response is triggered. In this model DAM does more than alerting and blocking; it leverages masking, encryption, and labeling technologies to address security and compliance requirements. This model relies heavily on discovery to help administrators locate data and define usage policies in advance. While in many respects similar to SIEM, the model leans more toward real-time analysis of data usage. There is some overlap with DLP, but this model lacks endpoint capabilities and full content awareness.
  • The ADMP Model – Sometimes called the Web AppSec model: DAM is linked with web application firewalls to provide activity monitoring and enforcement within databases and applications. DAM protects content in a structured application and database stack, the WAF shields application functions from misuse and injection attacks, and File Activity Monitoring (FAM) protects data as it moves in and out of documents or unstructured repositories. This model is more application-aware than the others, reducing false positives through transactional awareness, and it also provides advanced detection of web-borne threats.
  • The Policy-Driven Security Model – The classic database security workflow of discovery, assessment, monitoring, and auditing, with each function overlapping the next to pre-generate rules and policies. In this model, DAM is just one of many tools to collect and analyze events, and not necessarily central to the platform. What’s common among vendors who offer this model is policy orchestration: policies are abstracted from the infrastructure, with the underlying database – and even non-database – tools working in unison to fulfill the security and compliance requirements. How work gets done is somewhat hidden from the user. This model is great for reducing the pain of creating and managing policies, but because the technologies are pre-bundled, it lacks the flexibility of the other platforms.
  • The Proxy Model – Here DAM sits in front of the database, filtering inbound requests and acting as a proxy server. What’s different is what the proxy does with inbound queries. In some cases a query is blocked because it fits a known attack signature, and DAM acts as a firewall to protect the database – a method sometimes called ‘virtual patching’. In other cases the query is not forwarded to the database because the DAM proxy has recently seen the same request, and it returns the cached results directly to the calling application; DAM is in essence a cache that speeds up performance. Some platforms also provide the option of rewriting inbound queries, either to optimize performance or to minimize the risk that an inbound query is malicious. A minimal sketch of this blocking-and-caching behavior follows the list.
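Here is a minimal sketch, in Python, of the proxy model’s decision flow: block queries matching known attack signatures, serve repeated queries from cache, and otherwise forward to the database. The signatures, cache policy, and forward_to_database function are hypothetical simplifications of what a real product implements.

```python
# Hypothetical sketch of a DAM proxy's decision flow.

import re
import time

ATTACK_SIGNATURES = [
    re.compile(r"UNION\s+SELECT", re.IGNORECASE),    # classic injection probe
    re.compile(r";\s*DROP\s+TABLE", re.IGNORECASE),  # piggybacked destructive query
]

CACHE_TTL_SECONDS = 30
_cache = {}  # query text -> (timestamp, rows)

def forward_to_database(query):
    """Placeholder for sending the query to the real DBMS."""
    return [("row",)]  # assumed result

def handle_query(query):
    # 1. Virtual patching: refuse queries matching known attack signatures.
    for sig in ATTACK_SIGNATURES:
        if sig.search(query):
            raise PermissionError("query blocked by DAM proxy signature")

    # 2. Cache: answer a recently seen identical query without touching
    #    the database.
    cached = _cache.get(query)
    if cached and time.time() - cached[0] < CACHE_TTL_SECONDS:
        return cached[1]

    # 3. Otherwise forward the query, then cache the result.
    rows = forward_to_database(query)
    _cache[query] = (time.time(), rows)
    return rows

print(handle_query("SELECT name FROM customers"))  # forwarded, then cached
print(handle_query("SELECT name FROM customers"))  # served from cache
```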

DAM tools have expanded into other areas of data, database, and application security.

Market Drivers

DAM tools are extremely flexible and often deployed for what may appear to be totally unrelated reasons. Deployments are typically driven by one of three drivers: Auditing for
Tokenization Guidance: PCI Requirement Checklist

So far in this series on tokenization guidance for protecting payment data, we have covered deficiencies in the PCI supplement, offered specific advice for merchants to reduce audit scope, and provided specific tips on what to look for during an audit. In this final post we provide a checklist of each PCI requirement affected by tokenization, with guidance on how to modify compliance efforts in light of tokenization. I have tried to be as brief as possible while still covering the important areas of compliance reporting you need to adjust. Here is our recommended PCI requirements checklist for tokenization:

1.2 Firewall Configuration – The token server should restrict all IP traffic – to and from – to the systems specified under the ‘ground rules’, specifically:

  • Payment processor
  • PAN storage server (if separate)
  • Systems that request tokens
  • Systems that request de-tokenization

This is no different than the PCI requirements for the CDE, but we recommend these systems communicate only with each other. If the token server is on site, Internet and DMZ access should be limited to communications with the payment processor.

2.1 Defaults – Implementation of most of requirement 2 will be identical, but section 2.1 is the most critical: there should be no ‘default’ accounts or passwords for the tokenization server. This is especially true for systems that are remotely managed or have remote customer-care options. All PAN security hinges on effective identity and access management, so establishing unique accounts with strong pass-phrases is essential.

2.2.1 Single-function servers – 2.2.1 bears mention both from a security standpoint and for protection from vendor lock-in. For security, consider an on-premise token server a standalone function, separate and distinct from the applications that make tokenization and de-tokenization requests. To reduce vendor lock-in, make sure the token service API calls, or vendor-supplied libraries used by your credit card processing applications, are sufficiently abstracted to facilitate switching token services without significant modifications to your PAN processing applications. A sketch of such an abstraction layer appears below.
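To illustrate the abstraction point, here is a minimal sketch in Python of a tokenization interface your payment applications could code against, with the vendor-specific client hidden behind it. The interface, class names, and vendor client are hypothetical; a real integration would wrap your vendor’s actual SDK.

```python
# Hypothetical abstraction layer over a token service. Payment applications
# depend only on TokenService; VendorXTokenService stands in for a wrapper
# around a real vendor SDK and can be swapped without touching callers.

from abc import ABC, abstractmethod

class TokenService(ABC):
    @abstractmethod
    def tokenize(self, pan: str) -> str:
        """Exchange a PAN for a token."""

    @abstractmethod
    def detokenize(self, token: str) -> str:
        """Exchange a token for the original PAN (restricted systems only)."""

class VendorXTokenService(TokenService):
    """Adapter around a hypothetical vendor client, injected at startup."""
    def __init__(self, client):
        self._client = client  # vendor-specific client object

    def tokenize(self, pan: str) -> str:
        return self._client.create_token(pan)

    def detokenize(self, token: str) -> str:
        return self._client.resolve_token(token)

# Application code sees only the interface, so switching vendors means
# writing a new adapter, not modifying PAN-processing applications.
def settle_payment(service: TokenService, token: str) -> None:
    pan = service.detokenize(token)
    ...  # send PAN to the processor over an encrypted channel
```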
2.3 Encrypted communication – You’ll want to encrypt non-console administration, per the specification, but also all API calls to the token service. When using multiple token servers to support failover, scalability, or multiple locations, it’s also important that all synchronization occurs over encrypted communications, preferably a dedicated VPN with bi-directional authentication.

3.1 Minimize PAN storage – The beauty of tokenization is that it is the most effective solution available for minimizing PAN storage. By removing credit card numbers from every system other than the central token server, you cut the scope of your PCI audit. Look to tokenize or remove every piece of cardholder data you can, keeping what remains only in the token server. You’ll still meet business, legal, and regulatory requirements while improving security.

3.2 Authentication data – Tokenization does not circumvent this requirement; you must remove sensitive authentication data per sub-sections 3.2.x.

3.3 Masks – Technically you are allowed to preserve the first six digits and the last four digits of the PAN. However, we recommend you examine your business processing requirements to determine whether you can fully tokenize the PAN, or at a minimum preserve only the last four digits for customer verification. With only six digits free to vary, the pool of possible tokens is too small for many merchants to generate quality random numbers; please refer to ongoing public discussions on this topic for more information. When using a token service from your payment processor, ask for single-use tokens to avoid possible cases of cross-vendor fraud.

3.4 Render PAN unreadable – One of the principal benefits of tokenization is that it renders PAN unreadable. However, auditing environments with tokenization requires two specific changes:

  1. Verify that PAN data is actually swapped for tokens in all systems.
  2. For on-premise token servers, verify that the token server adequately encrypts stored PAN, or offers an equivalent form of protection, such as not storing PAN data at all.*

We do not recommend hashing, as it offers poor PAN data protection. Many vendors store hashed PAN values in the token database for speedy token lookup, but while common, it’s a poor choice. Our recommendation is to encrypt PAN data, and since most products use databases to store information, table-, column-, or row-level encryption within the token database is your best bet. Full database or file-layer encryption can be highly secure, but most solutions offer no failsafe protection when database or token admin credentials are compromised. We acknowledge our recommendations differ from most, but experience has taught us to err on the side of caution when it comes to PAN storage. A sketch of column-level PAN encryption follows.

*Select vendors offer one-time pad and codebook options that don’t require PAN storage.

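As an illustration of that recommendation, here is a minimal sketch in Python of column-level PAN encryption inside a token vault, assuming the PyPI cryptography package. The table layout, key handling, and token format are hypothetical; a production token server would pull keys from a proper key management service rather than generate them in code.

```python
# Hypothetical sketch: a token vault storing only an encrypted PAN column,
# using Fernet (AES-based) from the PyPI 'cryptography' package.

import secrets
import sqlite3
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in production: fetched from key management
cipher = Fernet(key)

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE vault (token TEXT PRIMARY KEY, pan_enc BLOB)")

def tokenize(pan: str) -> str:
    token = secrets.token_hex(8)            # random token, not derived from PAN
    pan_enc = cipher.encrypt(pan.encode())  # column-level encryption of the PAN
    db.execute("INSERT INTO vault VALUES (?, ?)", (token, pan_enc))
    return token

def detokenize(token: str) -> str:
    row = db.execute("SELECT pan_enc FROM vault WHERE token = ?",
                     (token,)).fetchone()
    return cipher.decrypt(row[0]).decode()

t = tokenize("4111111111111111")
assert detokenize(t) == "4111111111111111"
print("token:", t)
```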
3.5 Key management – Token servers encrypt the PAN data they store internally, so you’ll need to verify the supporting key management system as best you can. Some token servers offer embedded key management, while others are designed to leverage your existing key management services. Very few people can adequately evaluate key management systems to ensure they are really secure, but at the very least you can check that the vendor uses industry-standard components, or has had their implementation validated by a third party. Just make sure they are not storing the keys in the token database unencrypted. It happens.

4.1 Strong crypto on Internet communications – As with requirement 2.3, when using multiple token servers to support failover, scalability, and/or multi-region support, ensure that all synchronization occurs over encrypted communications, preferably a dedicated VPN with bi-directional authentication.

6.3 Secure development – On-site and third-party token servers will both introduce new libraries and API calls into your environment. It’s critical that your development process validates that what you put into production is secure. You can’t take the vendor’s word for it – you’ll need to verify that all defaults, debugging code, and API calls are secured. Ultimately

Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

It goes beyond Open Source Research, and is a far cry from the traditional syndicated research model; we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.