Understanding and Selecting DSP: Core Features

So far this series has introduced Database Security Platforms, provided a full definition of DSP, discussed the origins and evolution of DAM to DSP, and described the technical platform architecture. Now we cover the basics of a Database Security Platform: its core features. It might seem like a short list compared to all the extended features we will cover later, but these are the most important areas, and the primary reasons to buy these tools.

Activity Monitoring

The single defining feature of Database Security Platforms is their ability to collect and monitor all database activity. This includes all administrator and system activity that touches data (short of things like indexing and other autonomous internal functions). We have already covered the various event sources and collection techniques used to power this monitoring, but let’s briefly review what kinds of activity these products can monitor:

  • All SQL – DML, DDL, DCL, and TCL: Activity monitoring needs to include all interactions with the data in the database, which for most databases (even non-relational) involves some form of SQL (Structured Query Language). SQL breaks down into the Data Manipulation Language (DML, for select/update queries), the Data Definition Language (DDL, for creating and changing table structure), the Data Control Language (DCL, for managing permissions and such), and the Transaction Control Language (TCL, for things like rollbacks and commits). As you likely gathered from our discussion of event sources, depending on a product’s collection techniques, it may or may not cover all this activity.
  • SELECT queries: Although a SELECT query is merely one of the DML activities, due to the potential for data leakage, SELECT statements are monitored particularly closely for misuse. Common controls examine the type of data being requested and the size of the result set, and check for SQL injection.
  • Administrator activity: Most administrator activity is handled via queries, but administrators have a wider range of ways they can connect to databases than regular users, and more ability to hide or erase traces of their activity. This is one of the biggest reasons to consider a DSP tool, rather than relying on native auditing.
  • Stored procedures, scripts, and code: Stored procedures and other forms of database scripting may be used in attacks to circumvent user-based monitoring controls. DSP tools should also track this internal activity (if necessary).
  • File activity, if necessary: While a traditional relational database relies on query activity to view and modify data, many newer systems (and a few old ones) work by manipulating files directly. If you can modify the data by skipping the Database Management System and editing files directly on disk (without breaking everything, as would happen with most relational systems), some level of file monitoring is probably called for.

Even with a DSP tool, it isn’t always viable to collect everything, so the product should support custom monitoring policies to select what types of activities and/or user accounts to monitor. For example, many customers deploy a tool only to monitor administrator activity, or to monitor all administrators’ SELECT queries and all updates by everyone.
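
To make those four SQL buckets concrete, here is a minimal sketch of first-keyword classification in Python. This is our illustration, not any vendor’s code: a real DSP product parses complete statements rather than matching keywords, and the keyword map is deliberately abbreviated.

```python
# Minimal sketch: bucket SQL statements into DML/DDL/DCL/TCL by first keyword.
# A real DSP product parses complete statements; this map is deliberately
# abbreviated for illustration.
SQL_CATEGORIES = {
    "SELECT": "DML", "INSERT": "DML", "UPDATE": "DML", "DELETE": "DML",
    "CREATE": "DDL", "ALTER": "DDL", "DROP": "DDL", "TRUNCATE": "DDL",
    "GRANT": "DCL", "REVOKE": "DCL",
    "COMMIT": "TCL", "ROLLBACK": "TCL", "SAVEPOINT": "TCL",
}

def classify(statement: str) -> str:
    words = statement.split()
    return SQL_CATEGORIES.get(words[0].upper(), "UNKNOWN") if words else "UNKNOWN"

for stmt in (
    "SELECT ssn FROM customers",
    "ALTER TABLE customers ADD notes VARCHAR(255)",
    "GRANT DBA TO eve",
    "ROLLBACK",
):
    print(f"{classify(stmt):8} {stmt}")
```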

Policy Enforcement

One of the distinguishing characteristics of DSP tools is that they don’t just collect and log activity – they analyze it in real or near-real time for policy violations. While still technically a detective control (we will discuss preventative deployments later), the ability to alert and respond in or close to real time offers security capabilities far beyond simple log analysis. Successful database attacks are rarely the result of a single malicious query – they involve a sequence of events (such as exploits, alterations, and probing) leading to eventual damage. Ideally, policies are established to detect such activity early enough to prevent the final loss-bearing act. Even when an alert is triggered after the fact, it facilitates immediate incident response, and investigation can begin immediately rather than after days or weeks of analysis.

Monitoring policies fall into two basic categories:

  • Rule-based: Specific rules are established and monitored for violation. They can include specific queries, result counts, administrative functions (such as new user creation and rights changes), signature-based SQL injection detection, UPDATE or other transactions by users of a certain level on certain tables/fields, or any other activity that can be specifically described. Advanced rules can correlate across different parts of a database or even different databases, accounting for data sensitivity based on DBMS labels or through registration in the DAM tool.
  • Heuristic: Monitoring database activity builds a profile of ‘normal’ activity (we also call this “behavioral profiling”). Deviations then generate policy alerts. Heuristics are complicated and require tuning to work effectively. They are a good way to build a base policy set, especially for complex systems where manually creating deterministic rules isn’t realistic; policies are then tuned over time to reduce false positives. For well-defined systems where activity is consistent, such as an application talking to a database using a limited set of queries, they are very useful. Of course heuristics fail when malicious activity is mis-profiled as good activity.

Aggregation and Correlation

One characteristic Database Security Platforms share with Security Information and Event Management (SIEM) tools is the ability to collect disparate activity logs from a variety of database management systems – and then to aggregate, correlate, and enrich the event data. Combining multiple data sources across heterogeneous database types enables more complete analysis of activity, rather than working on one isolated query at a time. And by understanding the SQL syntax of each database platform, DSP can interpret queries and parse their meaning. While a simple SELECT statement might mean the same thing across different database platforms, each database management system (DBMS) is chock full of its own particular syntax. A DSP solution should understand the SQL for each covered platform and be able to normalize events so the analyst doesn’t need to know the ins and outs of each DBMS. For example, if you want to review all privilege escalations on all covered systems, a DSP tool will recognize those events across platforms and present you with a complete report, without requiring you to understand the SQL particulars of each one.
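
To illustrate the normalization idea, here is a small sketch that maps privilege escalations from a few platforms onto one common event type. The Oracle, SQL Server, and MySQL statements are real syntax, but the event schema is made up for this example and the rule set is nowhere near complete.

```python
import re

# Sketch: normalize platform-specific privilege escalations into one common
# event type, so a single report covers every monitored platform. The event
# schema is hypothetical and the patterns are far from exhaustive.
PRIV_ESCALATION = {
    "oracle":    re.compile(r"^\s*GRANT\s+DBA\s+TO\s+(\w+)", re.I),
    "sqlserver": re.compile(r"^\s*ALTER\s+SERVER\s+ROLE\s+sysadmin\s+ADD\s+MEMBER\s+(\w+)", re.I),
    "mysql":     re.compile(r"^\s*GRANT\s+ALL\s+PRIVILEGES\s+ON\s+\*\.\*\s+TO\s+'?(\w+)'?", re.I),
}

def normalize(platform: str, statement: str) -> dict | None:
    pattern = PRIV_ESCALATION.get(platform)
    match = pattern.match(statement) if pattern else None
    if not match:
        return None
    return {"event": "privilege_escalation", "platform": platform,
            "grantee": match.group(1), "raw": statement}

print(normalize("oracle", "GRANT DBA TO eve"))
print(normalize("sqlserver", "ALTER SERVER ROLE sysadmin ADD MEMBER eve"))
```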

Assessment

We typically see three types of assessment

iOS Data Security: Managed Devices

In our last post, on data security for partially-managed devices, I missed one option we need to cover before moving on to fully-managed devices:

User-owned device with managed/backhaul network (cloud or enterprise)

This option is an adjunct to our other data security tools, and isn’t sufficient for protecting data on its own. The users own their devices, but agree to route all traffic through an enterprise-managed network. This might be via a VPN back to the corporate network or through a VPN service. On the data security side, this enables you to monitor all network traffic – possibly including SSL traffic (by installing a special certificate on the device). This is more about malware protection and reducing the likelihood of malicious apps on the devices, but it also supports more complete DLP.

Managed Devices

When it comes to data security on managed devices, life for the security administrator gets a bit easier. With full control of the device we can enforce any policies we want, although users might not be thrilled. Remember that full control doesn’t necessarily mean the device is in a highly-restricted kiosk mode – you can still allow a range of activities while maintaining security. All our previous data security options are available here, as well as:

MDM managed device with Data Protection

Using a Mobile Device Management tool, the iOS device is completely managed and restricted. The user is unable to install unapproved applications, email is limited to the approved enterprise account, and all security settings are enabled for Data Protection. Restricting the applications allowed on the device and enforcing security policies makes it much more difficult for users to leak data through unapproved services. Plus you gain full Data Protection, strong passcodes, and remote wiping. Some MDM tools even detect jailbroken devices. To gain the full benefit of Data Protection, you need to block unapproved apps which could leak data (such as Dropbox and iCloud apps). This isn’t always viable, which is why this option is often combined with a captive network to give users a bit more flexibility.

Managed/backhaul network with DLP, etc.

The device uses an on-demand VPN to route all network traffic, at all times, through an enterprise or cloud portal. We call it an “on-demand” VPN because the device automatically shuts the tunnel down when there is no network traffic and brings it up before sending traffic – the VPN ‘coverage’ is comprehensive. “On-demand” here definitely does not mean users can bring the VPN up and down as they want. Combined with full device management, the captive network affords complete control over all data moving onto and off the devices. This is primarily used with DLP to manage sensitive data, but it may also be used for application control, or even to allow use of non-enterprise email accounts, which are still monitored. On the DLP front, while we can manage enterprise email without needing a full captive network, this option enables us to also manage data in web traffic.
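
For the curious, here is roughly what the on-demand piece of such a setup looks like in an iOS configuration profile, sketched as a Python dict and serialized to plist form. The key names follow Apple’s VPN payload documentation as we understand it, and several required profile keys are omitted, so treat this as an illustration rather than a deployable profile.

```python
import plistlib

# Illustrative fragment of an iOS VPN payload with on-demand rules. Key names
# follow Apple's VPN payload as we understand it; required profile keys
# (PayloadUUID, PayloadIdentifier, etc.) are omitted from this sketch.
vpn_payload = {
    "PayloadType": "com.apple.vpn.managed",
    "UserDefinedName": "Corporate backhaul VPN",
    "VPNType": "IPSec",            # assumption: whatever your gateway supports
    "OnDemandEnabled": 1,          # device manages the tunnel automatically
    "OnDemandRules": [
        {"Action": "Connect"},     # route everything through the VPN, always
    ],
}

print(plistlib.dumps(vpn_payload).decode())
```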

Full control of the device and network doesn’t obviate the need for certain other security options. For example, you might still need encryption or DRM, as these allow use of otherwise insecure cloud and sharing services.

Now that we have covered our security options, our next post will look at picking a strategy.

Watching the Watchers: Protect Credentials

As we continue our march through the Privileged User Lifecycle, we have provisioned the privileged users and restricted access to only the devices they are authorized to manage. The next risk to address is the keys or credentials of these privileged users (P-Users) falling into the wrong hands. The best access and entitlements security controls fail if someone can impersonate a P-User. But the worst risk isn’t even compromised credentials. It’s not having unique credentials in the first place.

You must have seen the old admin password sharing scheme, right? It was used, mostly out of necessity, many moons ago. Administrators needed access to the devices they managed. But at times they needed help, so they asked a buddy to take care of something, and just gave him/her the credentials. What could possibly go wrong? We covered a lot of that in the Keys to the Kingdom. Shared administrative credentials open Pandora’s box. Once the credentials are in circulation you can’t get them back – which is a problem when an admin leaves the company or no longer has those particular privileges. You can’t deprovision shared credentials, so you need to change them. PCI, as the low bar for security (just ask Global Payments), recognizes the issues with sharing IDs, so Requirement 8 is all about making sure anyone with access to protected data uses a unique ID, and that their use is audited – so you can attribute every action to a particular user.

But that’s not all! (in my best infomercial voice) What about the fact that some endpoints could be compromised? Even administrative endpoints. So sending admin credentials to that endpoint might not be safe. And what happens when developers hard-code credentials into an application? Why go through the hassle of secure coding – just embed the password right into the application! That password never changes anyway, so what’s the risk? So we need to protect credentials as much as whatever they control.

Credential Lockdown

How can we protect these credentials? Locking the credentials away in a vault meets many of the requirements described above. First, if the credentials are stored in a vault, it is harder for admins to share them. Let’s not put the cart before the horse, but this makes it pretty easy (and transparent) to change the password after every access, eliminating the sticky-note-under-the-keyboard risk. Going through the vault for every administrative credential access means you have an audit trail of who used which credentials (and presumably which specific devices they were managing) and when. That kind of stuff makes auditors happy. Depending on the deployment of the vault, the administrator may never even see the credentials – they can be automatically entered on the server if you use a proxy approach to restricting access. This also provides single sign-on to all managed devices: the administrator authenticates (presumably using multiple factors) to the proxy, which interfaces directly with the vault, transparently to the user. So even an administrative device teeming with malware cannot expose critical credentials. Similarly, an application can make a call to the vault, rather than hard-coding credentials into the app. Yes, the credentials still end up on the application server, but that’s still much better than hard-coding the password.

So are you sold yet? If you worry about credentials being accessed and misused, a password vault provides a good mechanism for protecting them.
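
To make the application case concrete, here is a sketch of what swapping a hard-coded password for a vault call might look like. The endpoint, token handling, and response format are all hypothetical; every vault product exposes its own API, so check your vendor’s documentation.

```python
import json
import os
import urllib.request

# Hypothetical sketch: fetch a database credential from a password vault at
# startup instead of embedding it in source. The endpoint and response format
# are illustrative only.
VAULT_URL = "https://vault.example.com/api/v1/credentials/orders-db"

def fetch_db_password() -> str:
    request = urllib.request.Request(
        VAULT_URL,
        # The app authenticates to the vault with its own identity, typically
        # a certificate or short-lived token, never a shared admin password.
        headers={"Authorization": f"Bearer {os.environ['VAULT_TOKEN']}"},
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)["password"]

# The credential lives only in memory; nothing is hard-coded in the app.
db_password = fetch_db_password()
```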
Define Policies

As with most things in security, using a vault involves both technology and process. We will tackle the process first, because without a good process even the best technology has no chance. So before you implement anything you need to define the rules of (credential) engagement. You need to answer some questions:

  • Which systems and devices need to be involved in the password management system? This may involve servers (physical and/or virtual), network and security devices, infrastructure services (DNS, directory, mail, etc.), databases, and/or applications. Ideally your vault will natively support most of your targets, but broad protection is likely to require some integration work on your end, so make sure any solution you look at has some kind of API to facilitate this integration.
  • How does each target use the vault? You need to decide who (likely by group) can access each target, how long they are allowed to use the credentials (and manage the device), and whether they need to present additional authentication factors to access the device. You’ll also define whether multiple administrators can access managed devices simultaneously, and whether to change the password after each check-in/check-out cycle. Finally, you may need to support external administrators (for third-party management or business partner integration), so keep that in mind as you work through these decisions.
  • What kind of administrator experience makes sense? Finally, you need to figure out the P-User interaction with the system. Will it be via a proxy login, where the user never sees the credentials, or will there be a secure agent on the device to receive and protect the credential? Figure out how the vault supports application-to-database and application-to-application interaction, as those are different from supporting human admins. You’ll also want to specify which activities are audited and how long audit logs are kept.
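
Once those questions are answered, the decisions for a given target class boil down to something like the following sketch. The schema is ours and purely illustrative; each vault product defines its own policy format.

```python
# Hypothetical check-out policy for one class of targets, capturing the
# decisions discussed above. The schema is illustrative, not a product format.
checkout_policy = {
    "target": "production-unix-servers",
    "allowed_groups": ["unix-admins"],       # who may check out credentials
    "additional_auth": "one-time-token",     # extra factor required at check-out
    "max_checkout_minutes": 60,              # how long the credentials stay valid
    "concurrent_checkouts": False,           # one admin per device at a time
    "rotate_on_checkin": True,               # change the password after each use
    "external_admins": False,                # third parties and partners barred
    "audit": {"events": ["checkout", "checkin", "login"], "retention_days": 365},
}
```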

Securing the Vault

If you are putting the keys to the kingdom in this vault, make sure it’s secure. You probably will not bring a product in and set your application pen-test ninjas loose on it, so you are more likely to rely on what we call the sniff test. Ask questions to see whether the vendor has done their homework to protect the vault:

  • Understand the security architecture of the vault. Yes, you may have to sign a non-disclosure agreement to see the details, but it’s worth it. You need to know how they protect things.
  • Discuss the threat model(s) the vendor uses to implement that security architecture. Make sure they didn’t miss any obvious attack vectors.
  • Poke around their development process a bit and make sure they have a proper SDLC and actually test for security defects before

Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited to a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.