Rich and I – with help from Chris Pepper – compiled the Understanding and Selecting a Database Security Platform series into a research paper, and provided it to a number of people for initial review. We got a lot of valuable feedback and observations back. Commenters felt several topics were under-served, others were over-emphasized, and some we failed to mention entirely. We’re not too proud to admit when we’re wrong, or when we failed to capture the essence of customer buying decisions, so we are happy to revisit these topics. We believe their feedback improves the paper quite a bit. In keeping with our Totally Transparent Research process we want all discussions that affect the paper out in the open, so we are posting those comments here for review. If you have additional comments, or responses to anything here, we encourage you to chime in.
This series took longer to produce than most of our other research papers, and some readers had trouble following along from beginning to end. For the sake of continuity I have listed all the blog posts:
- Understanding and Selecting a Database Security Platform: Introduction
- Defining DSP
- Core components and the evolution of DAM to DSP
- Event collection
- Technical architecture
- Core features
- Extended features
- Administration and management
- Use cases
And for reference, the original Understanding and Selecting a Database Activity Monitoring Solution research paper and the first DAM 2.0 posts offer additional insight. Once we have discussed all the comments and pulled all relevant feedback into the paper, we will release the final version.
Reader interactions
9 Replies to “Understanding and Selecting a Database Security Platform: Comments and Series Index”
Regarding use cases: “To monitor database administrators. This is often the single largest reason to use a DSP product in a compliance project.” – Rather than limit this to just DBAs, we think “privileged users” better captures the target of monitoring here. In fact, you might even generalize that out to “separation of duties” (e.g., finance guy querying HR data).
Regarding SQL injection in the section “Web application security” – WAF is a first line of defense for SQL injections.
Regarding “Audit Logs” – We see the lack of separation of duties as one of the biggest issues with using audit logs.
Regarding “Hierarchical Architecture” – May also want to mention Service Providers as DSP is deployed as a managed service.
Regarding: “Blocking Options” – Blocking capabilities can be delivered by agents or via transparent inline bridging. Some of the advantages of an inline bridge include real-time blocking, the option to “fail open”, and minimal changes to data center infrastructure.
Regarding “Define Needs” / “Selection Committee” – May also want to mention Lines of Business here. They often have the money that funds these projects.
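The SQL injection point above is easy to see in miniature. Here is a hedged sketch (hypothetical table and names, not from the paper) of why input concatenated into SQL text is exploitable while a parameterized query is not; this is the class of attack a WAF or database firewall sits in front of:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's1'), ('bob', 's2')")

malicious = "nobody' OR '1'='1"

# Vulnerable: attacker input is spliced into the SQL text, so the
# injected OR clause matches every row in the table.
vulnerable = conn.execute(
    "SELECT secret FROM users WHERE name = '%s'" % malicious
).fetchall()

# Parameterized: the driver treats the input as a literal value,
# so no row matches the (nonsensical) name and nothing leaks.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (malicious,)
).fetchall()
```

Here `vulnerable` comes back with both users’ secrets while `safe` is empty, which is exactly the difference a first-line defense is trying to enforce when the application itself gets this wrong.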
* **On FAM/DAM/WAF**: We’re re-reading the paper in light of your comment on FAM/DAM/WAF — others have made the same comment — and we see your point. I want to stress it wasn’t our intention to portray this as a common combination in practice today. Our goal was to introduce the new methods of analysis that are being bundled into DSP platforms and, depending upon the bundle of DSP capabilities and integration with other products, to show the multiple paths we see customers taking to meet their own requirements.
In a nutshell, people buy DSP for audit/compliance (the bulk of the market), some buy for app security, and some for datacenter/operations security. Each path has a different feature focus, and we see most of the vendors aligning this way as well (long term). We agree with you that our example integrations come across as “you have to have WAF/FAM” and we will clean up the wording. But to push back, we aren’t seeing a lot of clients with big projects that include SIEM/IPS/DLP/Policy Management all as one big selection stack either. We see more workflow systems coupled with DSP than we do IPS, DLP, and Policy Management combined. Policy and SIEM overlays are common across the different use cases, but as loosely coupled deployments.
Hopefully this makes it clear that our intention isn’t to reflect attachment rates; we want to talk about security features we see being more deeply integrated into DSP.
We’ll clean up the wording; let us know what you think. That part of the paper is meant to give readers an idea of the general divergence we are seeing in the market.
* **On virtual patching**: we’re cleaning this up to include network agents (which aren’t used much anymore), kernel agents, and memory scanning. This should also be referenced back to the ‘core features’ as any monitor that is based solely on network activity is missing critical data for key compliance use cases.
* **On software vs. hardware**: Both issues raised are relevant to buying decisions, not so much for the understanding of the available architectures. These tradeoffs should be mentioned under the DSP Selection Process section, likely in the evaluation phase. It isn’t our place to judge one install type over the other since our customer references are all over the map and all the providers make the same claims. There are so many counter-claims here with customers skewing every direction after their selection process that even we can’t say definitively one product is better or worse overall. (It used to be easier, but everyone has advanced a lot).
* **On agents requiring system reboots**: Correct. This is true for any data collection method except kernel agents. We can reword to note that different agents affect host systems differently.
* **On application monitoring**: It’s merely an extended feature. Mostly designed for only a small percentage of top apps, but it addresses big problems in those apps. Some customers want it off the shelf, so we have to include it. But yes, a DSP tool can cover much, much more.
* **On “rule based” policies**: We have an entire section under ‘advanced features’ on query white listing, accounting for just what you describe, which is materially different than the attribute-based query analysis described in the ‘rule based’ section. However, we need to clean up the Query White Listing section, as it was supposed to use ‘reverse proxy’ as an example, not as a definitive deployment model. And it really should be ‘attribute based analysis’ rather than ‘rule based’.
* **On “heuristics”**: We did not exhaust the possible benefits/detractors of each analysis category; rather we tried to focus on the strengths of the basic models. We’ve seen ‘heuristics’ work to help build a baseline for behavioral profiling and detection, but once again, this is not used for whitelisting and it’s a bad option for blocking. We don’t agree with the generalization that they never work, as heuristics are the foundation of very common policies, such as flagging ‘too much data selected’ or ‘not a query Bob normally issues’.
* **On “known vulnerabilities and missing security patches”**: That’s actually what we meant, but we realized we’ve been in this world so much that it could be misread as being more narrow, so we are rewording and likely will enumerate all the major features.
* **Whitelisting**: Yep – a mistake. Agents can also whitelist. Again, this was meant to be an example, not the only deployment option.
* **Connection pooling**: We’ll broaden to include all known techniques.
* **On PoC success**: Good catch. We’ll include it.
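To make the whitelisting point above concrete: the idea is that a learned whitelist matches the *shape* of a query rather than its exact text, which is materially different from attribute-based rules. A minimal sketch (all names hypothetical, not any vendor’s implementation):

```python
import re

def fingerprint(sql: str) -> str:
    """Reduce a query to a normalized shape: lowercase, replace
    string/numeric literals with '?', and collapse whitespace, so
    'WHERE id = 7' and 'WHERE id = 9' produce the same fingerprint."""
    s = sql.strip().lower()
    s = re.sub(r"'[^']*'", "?", s)   # string literals
    s = re.sub(r"\b\d+\b", "?", s)   # numeric literals
    return re.sub(r"\s+", " ", s)

# Whitelist built from queries observed during a learning period
allowed = {fingerprint("SELECT name FROM users WHERE id = 1")}

def is_permitted(sql: str) -> bool:
    return fingerprint(sql) in allowed

is_permitted("SELECT name FROM users WHERE id = 42")  # → True (same shape)
is_permitted("SELECT secret FROM users")              # → False (unseen shape)
```

The design point is that any query shape never seen during learning is flagged, regardless of which attributes (user, time, source) it carries — which is why this sits in its own section rather than under ‘rule based’.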
-Adrian
Large parts of this series tie together “DAM”, “FAM” and “WAF”. This trio is offered by only one vendor, and the attachment rate of the three separate products is extremely low. Customers will seldom see them as inter-related. We see much higher attachment rates in other areas across all of our DAM (or DSP) projects, with completely different products – SIEM, enterprise-wide management tools, IPS, and DLP, to name some of the products our customers purchase from us as part of a wider project that contains DAM/DSP. The fact that (for example) the diagram on page 6 shows FAM, database firewall, and application security but does not mention SIEM, DLP, IPS, virtual systems’ security, or enterprise security policy management is a strange choice and does not align with our customers’ experience.
[We would also like to point out: ]
a. We believe that virtual patching for databases should be included as a major requirement of any decent enterprise implementation of DAM/DSP
b. There is also a need to differentiate between host-based agents that still sniff only the host’s network traffic and memory-based agents:
i. Network host agents will not monitor any intra-DB traffic
ii. Network host agents are not autonomous – they need to transport back to a centralized appliance all local traffic for further analysis. This will not scale in enterprise deployments with distributed environments (data centers in multiple locations or databases in the cloud)
c. We also think that the question of whether the solution is [software or hardware] impacts the buying decision; in some cases the number of appliances, or the number of intrusive agents, is a very important factor that impacts the implementation time, the implementation success, the cost of the implementation, and the ongoing maintenance cost. If there is a need to buy 100 days of professional consulting to set up the monitoring policies, and then additional consulting days when there is a need to expand the system to additional databases, while other solutions are self-service – this is a major decision-making factor.
d. You state [under network monitoring] that on a “positive note” [agent-based network sniffers do not] require a reboot and do not cause adverse side effects… This is also true of [memory monitoring, non-kernel agents], and not true of many of the network-based agents in the market. This should be addressed.
e. [Under] application monitoring. We would also note the shortcomings of this approach vs. database monitoring. With database monitoring you can pretty much cover most databases, as we and other vendors have shown. Applications are a completely different scale. It is one thing to support SAP (though even vendors who do that are limited in the number of versions and types of SAP they support), yet another to try to support even the top 10% of applications. So – is it really a feasible solution for customers?
f. […] you give examples of selecting what activity to monitor; in our experience a much more prevalent requirement from customers is to monitor only access to a specific schema, or specific tables, and monitor all access to those (sometimes excluding the application access).
g. [Under the section] “rule based” [policies]. We think you miss a very important point here. Rules allow customers to create white lists (in fact this is what most of our customers do). A white list can monitor all actions other than the ones executed by a list of apps, or it can be a rule that alerts when a user runs a query that is not one of the 300 queries that user is allowed to run, etc. We have tools in our product that provide an easy way to translate monitored events into such white lists, which we believe are the best way to monitor databases.
h. Same [with] “heuristic” [rule analysis]. You touch on it, but why not state very clearly that in dynamic production environments (we guess about 90% of the databases), heuristics simply do not work and only lead to both false positives and false negatives.
i. [For vulnerability assessment] you mention “known vulnerabilities and missing security patches.” Only the most basic assessment tools limit themselves to this. Other tools (like ours) look for thousands of other checks and scans, such as weak passwords, backdoors, vulnerable code stored on the database.
j. Whitelisting is NOT limited to reverse proxies, and as we mentioned above, we think it is the best way (sometimes combined with a few black list rules) to monitor databases, and certainly what our customers are doing with our solution.
k. Connection pooling – you mention “correlating”. This is an inferior methodology used by one vendor [and we feel there are better methods that] deterministically show which end user executed each query.
l. [I]n evaluating products we believe you may have skipped probably the most important part, which is that customers should talk to references their size and ask important questions such as “Did you manage to deploy the product? How long did it take? Is it working as advertised?”…
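As a rough illustration of the heuristic policies debated in this thread (e.g., flagging ‘too much data’ relative to a user’s norm), here is a purely hypothetical baseline sketch — not any product’s method. It also hints at the false-positive argument: in a dynamic environment the baseline keeps shifting, so legitimate spikes get flagged:

```python
from collections import defaultdict

# Rolling per-user history of how many rows each query returned.
history = defaultdict(list)

def observe(user: str, rows_returned: int, factor: float = 10.0) -> bool:
    """Record the event; return True if it should be flagged because
    it returned far more rows than this user's historical average."""
    past = history[user]
    flagged = bool(past) and rows_returned > factor * (sum(past) / len(past))
    past.append(rows_returned)
    return flagged

observe("bob", 20)    # builds the baseline, nothing to compare yet
observe("bob", 30)    # within 10x of the average → not flagged
observe("bob", 5000)  # ~200x the average → flagged
```

This is the heuristic model in a nutshell: useful for profiling and alerting, but clearly too coarse to drive blocking or to serve as a whitelist, which is where both sides of this exchange seem to agree.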
Jeez Mom – I’m sorry I threw away the tape recorder. But time and technology move on, and for that matter, so does DAM.
Call you soon,
Adrian.
Adrian you ignorant dork! What makes you think you can get rid of a perfectly good name like DAM, or go re-naming industry segments with the frequency of a cheap ham radio? Who do you think you are, Gartner? Oh, and call your mother more often. She’s worried about you riding that darned bicycle around in the desert.
@AnonymousTwo
1. I don’t doubt SQL injection (web app security) and database firewall/virtual patching are principal drivers for those customers most interested in web application security, but from our perspective compliance dominates buying decisions in the SMB as well as enterprise sales. We’ll keep a closer eye on this during our customer briefings, and keep this data point in mind, but it does run counter to what we hear in general. See my above response to AnonymousOne as well.
2. I am not certain how you are defining ‘advanced auditing’ but from your description it sounds like before/after values. Several other vendors gather before and after information into the central event repository for auditing purposes. None of the vendors I am aware of provide immutable log capability — and why should they when vanishingly few customers ask for it — but most provide encryption of the data repository to prevent tampering. This remains a common RFI/RFP checklist item, but I see customers either leverage their ‘hardened appliance’ to keep the data safe, or they use some form of TDE encryption to encrypt the event log/database. That seems to be suitable for the customers I’ve spoken with, and that is consistent with the SIEM/Log Management customer requirements.
3. Good points on testing. I added this to internal testing and added a whole section on ‘final analysis’ to cover some of these issues.
And another good point on ease of use. I added this into the final analysis section as well.
4. I used to hear about collecting logon/logoff all the time in RFIs and customer inquiries, but hear about it far less often now. I assumed they were getting this data from somewhere else and it kind of dropped off the map — which could just be coincidental, given that other, more critical issues have taken the spotlight away from this requirement. I’ve included a blurb about this under compliance. We still see user requests for this, but it’s limited to admins, as most general users don’t log into the database directly. I’ll include it in the report.
-Adrian
@AnonymousOne Thanks for the review.
* The paper is a bit compliance heavy. But we’ve got to recognize it’s still the dominant driver for security product sales. I’ve been seeing an increase in buying security for the sake of security as well. In fact I’d written about this in our pre-RSA guide as a trend, but my business partner shot me down with ‘twice nothing is nothing’. Security for the sake of security is still less than 10% of the total database tool inquiries we get, but it’s got a heartbeat, which was not there three years ago, so I think it’s worth adding the comment to the document.
* We’ve gotten this feedback that FAM appears too ‘front & center’ from others as well. Some feel the graphic makes FAM look more important than it is, and it’s used in a couple of examples. We re-read the paper with this in mind and we think it’s a valid point; FAM is over-emphasized. We’re changing the examples to provide a more balanced representation, and I’ve asked Rich to alter the placement and alphabetize the features in the graphic. Keep in mind that the reason we have FAM in examples is two-fold. First is the Sharepoint use case we are hearing more and more about, which DSP can cover by combining database and file monitoring techniques. The second is — and this is an assumption at this point — that FAM is the logical next step for NoSQL monitoring. These points seem to have been muted in the editing process, and it’s my intention to get them back into the paper while making sure we don’t overstate the case or misrepresent adoption rates that just aren’t there.
* If a platform has monitoring, auditing, discovery, assessment, and blocking, and collects all of the critical events, it is DSP. Clearly there are some grey shades here, and not every customer needs every facet of the available technology, so our categorization should not preclude a product from consideration, but this is the way we see the market going.
* On false-positives I’ve just added some additional text in each of the three analysis techniques to do a better job conveying this point. I’m not sure I’m quite there, but I am planning on taking another pass and I’ll likely add more to the ‘history of DAM’ section as well. This is why we have more than just attribute based analysis!
* The network section has been edited to remove the comment about agent impact on the target host and to discuss the (potential) lack of compliance-related information being gathered. We’ve not seen any more network performance impact from network agents than from other types of agents that parrot SQL statements back to a collector.
* On discovery — good points. I’ll clarify.
* On active response vs. blocking: I’ll add it, as it should be mentioned — customers do use this in combination with alerting.
* I’ve made a bunch of changes to the selection process section of the paper, particularly around practical questions for internal testing and a final analysis section. There were a few errors in these sections that I’ve cleaned up as well.
-Adrian
1. Are you sure that’s the right order for buying criteria for the SMB? We see more demand for database security (SQL Injection protection, database firewall, monitoring, operations support, etc) than compliance as you state.
2. You don’t really talk much about auditing and advanced auditing techniques, especially around collection of before/after values for log entries. And you don’t really mention the importance of secure logs and immutable data.
3. Under testing in the selection criteria section, can you add the need for a lower-maintenance, simplified product that doesn’t require too many resources to support the platform? We’re talking about ease of use, ease of configuration, and ease of maintenance, and the option to set it up and use it independently, without services or consultants.
4. Auditing of database logon/logoff is also very important. It allows you to investigate abnormal database activity (e.g., someone logs on to the database from a suspicious source or at a suspicious time).
* At first glance it seems kinda compliance heavy. Although compliance is still a key driver, we see security concerns increasing and being a greater motivator. Chalk it up to Anonymous or whoever, but this seems to be real based on our customer experience.
* There seems to be a lot of focus on FAM – based on our previous conversations, you guys told us that FAM is cool, but the market isn’t ready for it and adoption is several years away. If that is still true, is FAM being overemphasized in this paper?
* Where do you draw the line for products to be considered DSP, vs. just DAM or assessment?
* I’d like to see more commentary on learning systems and the potential for false positives. It’s mentioned in a few places, but I’m not sure the risk associated with false positives is emphasized enough.
* Network Monitoring – is it worth making any commentary on the network traffic overhead generated, or its impact on the network?
* Heuristic analysis section – same comment on false positives – they seem to be downplayed here. This is a very real concern and a fundamental difference in different scanning approaches.
* On database discovery – I think there are differences here in how people do discovery – specifically active discovery (scanning) vs passive discovery (basically identifying via monitoring the network). It’s worth highlighting and identifying the pros and cons of each. And you might want to mention value of being able to discover on “any port” vs. just the standard assigned database ports.
* Blocking – Few customers use blocking, whereas a great number of customers set monitoring policies to react (cutting off user accounts or integrating with other security tools) vs. ‘all or nothing’ blocking. Blocking has more impact on the environment, and false positives in this environment are more problematic.
* In the Deployment section, you don’t really get into issues of scalability and ease of deployment. This is one of the key things people tell us – especially those that have made the mistake of believing vendor claims and are now throwing them out. I think it’s a key consideration in an effective deployment and definitely a “real world” consideration.
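On the active vs. passive discovery point above: active discovery, at its most bare-bones, is just a connect scan over candidate ports (the sketch below is hypothetical, not any product’s method). Real tools go further and fingerprint the responding service, which is what lets them find databases listening on “any port” rather than only the standard assigned ones:

```python
import socket

def scan_host(host: str, ports) -> list:
    """Active discovery sketch: attempt a TCP connect on each
    candidate port and report the ones that accept. A real scanner
    would also fingerprint the banner to confirm it is a database."""
    open_ports = []
    for port in ports:
        with socket.socket() as s:
            s.settimeout(0.5)
            try:
                s.connect((host, port))
                open_ports.append(port)
            except OSError:
                pass  # closed, filtered, or timed out
    return open_ports

# Common default database ports: MS SQL 1433, Oracle 1521,
# MySQL 3306, PostgreSQL 5432 (a DB moved off these would be missed
# by a defaults-only scan, hence the "any port" comment above).
# scan_host("db.example.com", [1433, 1521, 3306, 5432])
```

Passive discovery inverts the tradeoff: it only sees databases that are actually generating traffic, but it imposes no scan load and can catch instances an active sweep was never pointed at.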