Understanding and Selecting DSP: Administration

Today’s post focuses on administering Database Security Platforms. Conceptually DSP is pretty simple: collect data from databases, analyze it according to established rules, and react when a rule has been violated. The administrative component of every DSP platform covers three basic tasks: data management, policy management, and workflow management. In addition to these three basic functions, we also need to administer the platform itself, as we do with any other application platform.

As we described in our earlier post on DSP technical architecture, DSP sends all collected data to a central server. The DAM precursors evolved from single servers, to two-tiered architectures, and finally into a hierarchical model, in order to scale up to enterprise environments. The good news is that system maintenance, data storage, and policy management are all available from a single console. While administration is now usually through a browser, the web application server that performs the work is built into the central management server. Unlike some other security products, not much glue code or browser trickery is required to stitch things together.

System Management

User Management: Because the platform has access to many different databases, and much of its filtering and reporting touches sensitive data, user management is critical for security. Establishing who can change policies, read collected data, or administer the platform are all specialized tasks, and these groups of users are typically kept separate. All DSP solutions offer methods for segregating users into different groups, each with differing granularity. Most of the platforms offer integration with directory services to aid in user provisioning and assignment of roles.

Collector/Sensor/Target Database Management: Agents and data collectors are managed from the central server. While data and policies are stored centrally, the collectors – which often enforce policy on the remote database – must periodically sync with the central server to update rules and settings. Some systems require the administrator to ‘push’ rules out to agents or remote servers, while others sync automatically.

Systems Management: DSP is, in and of itself, an application platform. It has web interfaces, automated services, and databases, like most enterprise applications. As such it requires some tweaking, patching, and configuration to perform its best. For example, the supporting database may need pruning to clear out older data, vendor assessment rules require updates, and the system may need additional resources for data storage and reports. The system management interface is provided via a web browser, but is only available to authorized administrators.

Data Aggregation & Correlation

The one characteristic Database Activity Monitoring solutions share with log management, and even Security Information and Event Management, tools is their ability to collect disparate activity logs from a variety of database management systems. They tend to exceed the capabilities of related technologies in their ability to go “up the stack” to gather deeper application-layer database activity, and in their ability to correlate information. Like SIEM, DSP aggregates, normalizes, and correlates events across many heterogeneous sources. Some platforms even provide an optional ‘enrichment’ capability, linking audit, identity, and assessment data to event records – for example, providing both ‘before’ and ‘after’ data values for a suspect query.
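To make the normalization idea concrete, here is a minimal sketch in Python. The raw field names and the common schema are hypothetical – real platforms parse each vendor’s native audit and network formats – but it shows how platform-specific records get mapped to one schema that policies and reports are written against.

```python
# A minimal sketch of event normalization across heterogeneous DBMSs.
# The raw record formats and the common schema below are hypothetical.

def normalize_oracle(raw):
    return {"user": raw["USERNAME"], "action": raw["ACTION_NAME"].upper(),
            "object": raw["OBJ_NAME"], "source": "oracle"}

def normalize_mssql(raw):
    return {"user": raw["login_name"], "action": raw["event_type"].upper(),
            "object": raw["object_name"], "source": "mssql"}

NORMALIZERS = {"oracle": normalize_oracle, "mssql": normalize_mssql}

def normalize(platform, raw_event):
    """Map a platform-specific audit record onto one common schema,
    so a single policy can cover every monitored database."""
    return NORMALIZERS[platform](raw_event)

# Example: two very different raw records, one normalized form.
print(normalize("oracle", {"USERNAME": "APP1", "ACTION_NAME": "select",
                           "OBJ_NAME": "CUSTOMERS"}))
print(normalize("mssql", {"login_name": "app1", "event_type": "select",
                          "object_name": "dbo.Customers"}))
```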
Despite central management and correlation features, the similarities with SIEM end there. By understanding the Structured Query Language (SQL) of each database platform, these platforms can interpret queries and understand their meaning. While a simple SELECT statement might mean the same thing across different database platforms, each database management system (DBMS) is full of its own particular syntax. DSP understands the SQL for each platform and is able to normalize events, so the user doesn’t need to know the ins and outs of each DBMS. For example, if you want to review all privilege escalations on all covered systems, a DAM solution will recognize those events regardless of platform, and present a complete report without you having to understand the SQL. A more advanced feature is to correlate activity across different transactions and platforms, rather than looking only at single events. For example, some platforms recognize a higher than normal transaction volume by a particular user, or (as we’ll consider in policies) can link a privilege escalation event with a large SELECT query on sensitive data, which could indicate an attack. All activity is also centrally collected in a secure repository to prevent tampering or a breach of the repository itself. Because they collect massive amounts of data, DSPs must support automatic archiving. Archiving should support separate backups of system activity, configuration, policies, alerts, and case management, encrypted under separate keys to support separation of duties.

Policy Management

All platforms come with sets of pre-packaged policies for security and compliance. For example, every product contains hundreds, if not thousands, of assessment policies that identify vulnerabilities. Most platforms come with pre-defined policies for monitoring standard deployments of databases behind major applications such as Oracle Financials and SAP. Built-in policies for PCI, SOX, and other generic compliance requirements are also available to help you jump-start the process and save many hours of policy building. Every policy can generate an alert when its rule is violated – usually through email, instant message, or some other messaging capability. Note that every user needs to tune or customize a subset of the pre-existing policies to match their environment, and create others to address specific risks to their data. Pre-packaged policies are still far better than starting from scratch. Activity monitoring policies include user/group, time of day, source/destination, and other important contextual options. These policies should offer different analysis techniques based on attributes, heuristics, context, and content analysis. They should also support advanced definitions, such as complex multi-level nesting and combinations. If a policy violation occurs, you can specify any number of alerting, event handling, and reactive actions. Ideally the platform will include policy creation tools that limit the need to write everything out in SQL or some other definition language – it’s much better if your compliance team does not need to learn SQL programming to create policies. You can’t avoid having to do some things
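To illustrate the kind of cross-event correlation described above – linking a privilege escalation to a subsequent large SELECT – here is a minimal sketch. The event fields, the GRANT-as-escalation signal, the row threshold, and the time window are all hypothetical; real platforms express rules like this in their own policy languages rather than code.

```python
# A minimal sketch of a correlation rule: alert when a user escalates
# privileges and then reads an unusually large number of rows within
# a time window. All fields and thresholds are hypothetical.
from datetime import timedelta

WINDOW = timedelta(minutes=10)
ROW_THRESHOLD = 10_000

def escalation_then_bulk_read(events):
    """events: normalized dicts with 'user', 'action', 'object',
    'timestamp' (datetime), and optionally 'rows_returned'."""
    alerts = []
    last_escalation = {}  # user -> time of most recent escalation
    for e in sorted(events, key=lambda ev: ev["timestamp"]):
        if e["action"] == "GRANT":  # simplistic escalation signal
            last_escalation[e["user"]] = e["timestamp"]
        elif (e["action"] == "SELECT"
              and e.get("rows_returned", 0) > ROW_THRESHOLD
              and e["user"] in last_escalation
              and e["timestamp"] - last_escalation[e["user"]] <= WINDOW):
            alerts.append(f"ALERT: {e['user']} escalated privileges, then "
                          f"read {e['rows_returned']} rows from {e['object']}")
    return alerts
```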


How to Tell If Your Cloud Provider Can Read Your Data (Hint: They Can)

Over at TidBITS today I published a non-security-geek-oriented article on how to tell whether your cloud provider can read your data. Since many of you are security geeks, here’s the short version (mostly cut and paste) and some more technical info.

The short version? If you don’t encrypt it and manage the keys yourself, of course someone on their side can read it (99+% of the time). There are three easy indicators that your cloud provider (especially SaaS providers) can read your data:

  • If you can see your data in a web browser after entering only your account password, the odds are extremely high that your provider can read it as well. The only way you could see your data in a web browser and still have it be hidden from your provider would require complex (fragile) JavaScript code, or a Flash/Java/ActiveX control, to decrypt and display the data locally.
  • If the service offers both web access and a desktop application, and you can access your data in both with the same account password, the odds are high that your provider can read your data. The common access indicates that your account password is probably being used to protect your data (usually your password is used to unlock your encryption key). While your provider could architect things so the same password is used in different ways to both encrypt data and allow web access, that doesn’t really happen.
  • If you can access the cloud service from a new device or application by simply providing your user name and password, your provider can probably read your data.

This is how I knew Dropbox could read my files long before that story hit the press. Once I saw that I could log in and see my files, or view them on my iPad, without using an encryption key other than my account password, I knew that my data was encrypted with a key that Dropbox manages. The same goes for the enterprise-focused file sharing service Box (even though it’s hard to tell from reading their site). Of course, since Dropbox stores just files, you can apply your own encryption before Dropbox ever sees your data, as I explained last year.

And iCloud? With iCloud I have a single user name and password. Apple offers a rich and well-designed web interface where I can manage individual email messages, calendar entries, and more. I can register new devices and computers with the same user name and password I use on the web site. So it has always been clear that Apple could read my content, just as Ars Technica reported recently (with quotes from me).

That doesn’t mean that Dropbox, iCloud, and similar services are insecure. They generally have extensive controls – both technical and policy restrictions – to keep employees from snooping. But such services aren’t suitable for all users in all cases – especially for businesses or governmental organizations that are contractually or legally obligated to keep certain data private.

Now let’s think beyond consumer services, about the enterprise side. Salesforce? Yep – of course they can read your data (unless you add an encryption proxy). SaaS providers nearly always can – they need access so they can do stuff with your data. PaaS? Same deal (again, unless you do the encryption yourself). IaaS? Of course – your instance needs to boot up somehow, and if you want attached volumes to be encrypted you have to do it yourself.

The main thing for Securosis readers to understand is that the vast majority of consumer and enterprise cloud services that mention encryption or offer encryption options manage your keys for you, and have full access to your data.
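Since Dropbox just stores files, the do-it-yourself approach mentioned above is straightforward. Here is a minimal sketch using the Python cryptography package (pip install cryptography); the file names are hypothetical. Only ciphertext ever reaches the provider, and the key stays on your machine.

```python
# A minimal sketch of client-side encryption before syncing a file.
# Keep the key yourself (offline, or in a password manager) -- if it
# lands in the synced folder, you've gained nothing.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # store this securely, outside the sync folder
cipher = Fernet(key)

with open("tax-return.pdf", "rb") as f:   # hypothetical file
    ciphertext = cipher.encrypt(f.read())

with open("tax-return.pdf.enc", "wb") as f:
    f.write(ciphertext)       # sync/upload this file, not the original

# To read it back later:
# with open("tax-return.pdf.enc", "rb") as f:
#     plaintext = cipher.decrypt(f.read())
```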
Why offer encryption at all, if it doesn’t really improve security? Compliance. It wipes out one risk (lost hard drives) and reduces compliance scope for physical handling of the storage media. It also looks good on a checklist. Take Amazon S3 – Amazon is really clear that although you can encrypt data, they can still read it.

I suppose the only reason I wrote this post and the article is that I’m sick of the “iWhatever service can read your data” non-stories that seem to crop up all the time. Duh.
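To illustrate the S3 point, here is a hypothetical sketch using the boto3 library (bucket and object names are made up). Server-side encryption protects the storage media, but Amazon holds the key, so Amazon can still read the object.

```python
# Sketch: upload an object with S3 server-side encryption via boto3.
# AWS encrypts the object at rest -- with a key AWS manages, so this
# addresses lost-drive risk, not provider access. Names are hypothetical.
import boto3

s3 = boto3.client("s3")
with open("q1-report.pdf", "rb") as f:
    s3.put_object(
        Bucket="example-bucket",
        Key="reports/q1-report.pdf",
        Body=f,
        ServerSideEncryption="AES256",  # SSE-S3: Amazon-managed key
    )
```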


Vulnerability Management Evolution: Scanning the Application Layer

In our last Vulnerability Management Evolution post we discussed scanning infrastructure, which remains an important part of vulnerability management. But we recognize that most attacks target applications directly, so we can no longer just scan the infrastructure and be done with it. We need to climb the stack and pay attention to the application layer, looking for vulnerabilities in applications as well as their supporting components. But that requires us to define an ‘application’, which is surprisingly difficult. A few years ago the definition of an application was fairly straightforward. Even in an N-tier app, with a variety of application servers and data stores, you largely controlled all the components of the application. Nowadays, not so much. Pre-assembled web stacks, open source application servers, third-party crypto libraries, and cloud-provided services all make for quick application development, but blur the line between your application and the supporting infrastructure. You have little visibility into what’s going on behind the curtain, but you’re still responsible for securing it.

For the purposes of our vulnerability/threat management discussion, we define the app as presentation plus infrastructure. The presentation layer focuses on assembling information from a number of different sources – either internal or external to your enterprise. The user of the application couldn’t care less about where the data comes from, so from a threat standpoint you need to assess the presentation code for issues that put devices at risk. But reducing the attack surface of applications also requires you to pay attention to the infrastructure: the application servers, interfaces, and databases that assemble the data presented by the application. So you scan application servers and databases to find problems. Let’s dig into the two aspects of the application layer to assess: databases and application infrastructure.

Database Layer

Assessing databases is more similar to scanning infrastructure than to scanning applications – you look for vulnerabilities in the DBMS (database management system). As with other infrastructure devices, databases can be misconfigured and might have improper entitlements, all of which pose risks to your environment. So assessment needs to focus on whether appropriate database patches have been installed, the configuration of the database, improper access control, entitlements, and so on. Let’s work through the key steps in database assessment:

Discovery: First you need to know where your databases are. That means a discovery process, preferably automated, to find both known and unknown databases (see the discovery sketch at the end of this post). You need to be wary of shadow IT, where lines of business and other groups build their own data stores – perhaps without the operational mojo of your data center group. You should also make sure you are continuously searching for new databases, because they can pop up anywhere, at any time, just like rogue access points – and they do.

Vulnerabilities: You will also look for vulnerabilities in your DBMS platform, which requires up-to-date tests for database issues. Your DB assessment provider should have a research team to keep track of the newest and latest attacks on whatever database platforms you use. Once something is found, information about exposure, workarounds, and remediation is critical for making your job easier.

Configurations: Checking the configuration of a DBMS is slightly different – you are mostly assessing internals.
Be sure to check the database both with credentials (as an authorized user) and without credentials (which more accurately represents a typical outside attacker). Both scenarios are common in database attacks, so make sure your configuration is sufficiently locked down against both of them.

Access Rights and Entitlements: Aside from default accounts and passwords, focus your efforts on making sure no users (neither humans nor applications) have additional entitlements that put the database platform at risk. For example, you need to ensure credentials of de-provisioned users have been removed, and that accounts which only need read access don’t have the ability to DROP TABLES (see the entitlement-check sketch at the end of this post). And you need to verify that users – especially administrators – cannot ‘backdoor’ the database through local system privileges. Part of this is housekeeping, but you need to pay attention – make sure your databases are configured correctly to avoid unnecessary risk.

Finally, we know this research focuses more on vulnerability/threat identification and assessment, but over time you will see even tighter integration between evolved vulnerability/threat management platforms and tactics to remediate problems. We have written a detailed research report on Database Assessment, and you should track our Database Security Platform research closely, so you can shorten your exposure window by catching problems and taking action more quickly.

Application Layer

Application assessment (especially of web applications) is a different animal, mostly because you have to actually ‘attack’ the application to find vulnerabilities, which might exist in the application code or in the infrastructure components it is built on. Obviously you need to crawl through the app to find issues to fix. There are several different types of app security testing (as discussed in Building a Web App Security Program), so we will just summarize here.

Platform Vulnerabilities: This is the stuff we check for when scanning infrastructure and databases. Applications aren’t ‘stand-alone’ – they depend on infrastructure and inherit vulnerabilities from their underlying components. The clearest example is a content management system: a web app built on Drupal inherits all the vulnerabilities of Drupal, unless they are somehow patched or worked around.

Static Application Security Testing (SAST): Also called “white box testing”, SAST involves developers analyzing source code to identify coding errors. This is not normally handled by security teams – it is usually part of a secure development lifecycle (SDLC).

Dynamic Application Security Testing (DAST): Also known as “black box testing”, DAST attempts to find application defects using bad inputs, fuzzing, and other techniques. It doesn’t require access to the source code, so some security teams get involved in DAST, but it is still largely seen as a development responsibility, because thorough DAST testing can be destructive to the app and so shouldn’t be used on production applications.

Web App Scanners

But the technology most relevant to the evolution of vulnerability management is the web application scanner. Many of the available vulnerability management offerings include an add-on capability to scan applications and their underlying infrastructures to identify
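As flagged under Discovery, here is a minimal Python sketch of automated database discovery: probing a subnet for well-known DBMS listener ports. The subnet and timeout are hypothetical, and real discovery tools go further – fingerprinting services and catching databases moved to non-default ports.

```python
# A minimal sketch of database discovery: probe hosts for well-known
# DBMS listener ports. The 10.0.0.0/24 subnet is hypothetical.
import socket

DB_PORTS = {1433: "SQL Server", 1521: "Oracle", 3306: "MySQL", 5432: "PostgreSQL"}

def listening(host, port, timeout=0.5):
    """Return True if something accepts a TCP connection on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for i in range(1, 255):
    host = f"10.0.0.{i}"
    for port, dbms in DB_PORTS.items():
        if listening(host, port):
            print(f"{host}:{port} answers on the default {dbms} port")
```

And here is a minimal sketch of the kind of entitlement check described under Access Rights and Entitlements: flagging MySQL accounts that hold the global DROP privilege. The connection details are made up, and it assumes the mysql-connector-python package; a real assessment product runs far broader entitlement reviews across every platform it supports.

```python
# A minimal sketch of an entitlement check for MySQL: list accounts
# with the global DROP privilege, which application or read-only
# accounts should almost never hold. Credentials here are hypothetical.
import mysql.connector

conn = mysql.connector.connect(host="db.example.com", user="auditor",
                               password="audit-password")
cur = conn.cursor()
# mysql.user stores global privileges; Drop_priv = 'Y' lets the account
# drop tables in any schema.
cur.execute("SELECT user, host FROM mysql.user WHERE Drop_priv = 'Y'")
for user, host in cur.fetchall():
    print(f"review: {user}@{host} has global DROP privilege")
conn.close()
```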


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context or contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.